META-ANALYSIS TRUMPS FAKE NEWS
Updated: Oct 11
Meta-analysis: a tool to avoid drowning in a sea of information, or, ‘should you be trying to get your hands on some hydroxychloroquine?’
If you’ve been reading any coronavirus-dominated news over the last wee while, you will have heard various advocates and naysayers debate the anti-coronavirus properties of the drug chloroquine and its less toxic derivative hydroxychloroquine (as most focus has been on hydroxychloroquine, we’ll lump them together and call them HCQ here). Traditionally used as an anti-malarial and lupus treatment, HCQ came into focus in the context of COVID-19 after a cell-culture study published in February this year suggested the drug had anti-coronavirus properties. This was followed by a controversial French study that purported to show a significant reduction in viral load for patients treated with HCQ. The original cell-culture study and small French trial have been followed up by trials across the world, with mixed results and much political and media hype. Is the drug a godsend, or a deadly distraction from research into other potential coronavirus cures? If only there were an approach that could synthesise results across studies in a methodical way, give us a general answer to the question of how effective HCQ is for treating coronavirus, and lead us toward better evidence-based public health policy… oh wait, there is such an approach! It’s called meta-analysis.
In this blog I’ll provide a brief explainer of what a meta-analysis is and how it can help us disentangle conflicting results from a suite of studies, along with some neat examples from medical and ecological topics. You’ll also get some clearer evidence to help you decide whether you should follow the lead of some of our more ‘charismatic’ leaders and scramble to acquire your own stash of HCQ.
The term meta-analysis was first coined in 1976; the approach became widely used in medical research from the 1990s, and over the last couple of decades it has spread to other fields too, including those close to our heart at DEEP (ecology and evolution!).
As the name suggests, a meta-analysis is essentially a statistical analysis of other analyses. In a meta-analysis, we search for a set of similarly designed experiments on a particular topic (the search should be done systematically, so we don’t accidentally pick a biased subset of studies), combine their data, and analyse them as a whole. The power of the approach is clear: rather than trying to understand studies in isolation, we look at the entire body of evidence. For example, if we’re looking at the treatment effect (or effect size) of a drug vs. placebo, this procedure allows us to report how consistent the effect is across different populations, and to estimate its magnitude more precisely than we could with any single study alone. If the effect size varies across studies, we can also investigate whether any factors explain the variation (for example, we might find that the drug is effective for young people but not so much for old people, or that the effect differs by sex). Another key feature of meta-analysis is that studies of lower quality, or with limited data, are given less weight in determining the overall effect size. All of the above are difficult or impossible to do with single experiments or traditional reviews.
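To make the weighting idea concrete, here is a minimal sketch of how a simple fixed-effect meta-analysis pools study estimates via inverse-variance weighting. All the numbers are invented for illustration; they are not drawn from any real trial.

```python
import math

# Hypothetical studies: (name, log odds ratio, standard error).
# These values are invented purely to illustrate the arithmetic.
studies = [
    ("Study A", 0.10, 0.30),
    ("Study B", -0.25, 0.45),
    ("Study C", 0.05, 0.20),
]

# Inverse-variance weighting: each study's weight is 1 / SE^2,
# so precisely estimated studies count for more in the pooled result.
weights = [1 / se**2 for _, _, se in studies]
pooled_log_or = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform to the odds-ratio scale with a 95% confidence interval.
pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Because the pooled standard error shrinks as studies are added, the combined confidence interval is narrower than any single study’s, which is exactly the gain in precision described above.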
In our HCQ example, there has been a raft of recent clinical trials and observational studies to assess the effectiveness of HCQ as a treatment for patients with COVID-19.
These studies have been synthesised in recent meta-analyses (there are already several published meta-analyses to date, although not all have been peer reviewed; you could in fact do a meta-analysis of these meta-analyses!). One of these serves as a useful way to explain some of the key features of a meta-analysis. In this preliminary meta-analysis by Chacko et al., 6 studies were assessed for the efficacy of HCQ in reducing patient mortality due to COVID-19. The key to understanding the result of any meta-analysis is a ‘forest plot’, and the forest plot below summarises the meta-analysis’ findings. The first 6 rows represent the 6 individual studies included in the meta-analysis. For each, the study’s author and year of publication are shown at left. Next to each study, its estimated effect size and 95% confidence interval (the range of values within which we are 95% confident the true effect size lies) are displayed schematically. At right, a series of numbers is given: the effect size estimate and its lower and upper 95% confidence limits. The effect size here is an odds ratio (OR): an odds ratio greater than 1 corresponds to a lower chance of mortality for patients in the control group, while one less than 1 corresponds to a lower chance of mortality for patients given HCQ. An odds ratio of 1 (the solid vertical line) represents an equal chance of mortality in both groups. If a given study’s confidence interval spans 1, we cannot statistically distinguish the mortality rates of control patients and those given HCQ. Each study’s weight in the overall meta-analysis is given at the far right, and is also represented schematically by the size of its square (studies with narrower confidence intervals, i.e. where the effect size is more precisely estimated, carry more weight in the meta-analysis).
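To make the odds ratio itself concrete, each row of a forest plot like this ultimately summarises a 2×2 table of outcomes from one study. Here is a minimal sketch with invented counts (not taken from any of the six studies), oriented so that, as in the plot described above, an OR greater than 1 means lower odds of death in the control group:

```python
import math

# Invented 2x2 table for one hypothetical study (not real data).
deaths_hcq, survivors_hcq = 9, 91     # HCQ arm
deaths_ctrl, survivors_ctrl = 7, 93   # control arm

# Odds of death in each arm, and their ratio (HCQ over control),
# so OR > 1 means the control group had lower odds of death.
odds_hcq = deaths_hcq / survivors_hcq
odds_ctrl = deaths_ctrl / survivors_ctrl
or_est = odds_hcq / odds_ctrl

# Standard error of log(OR) from the usual sum of reciprocal cell
# counts, then a 95% confidence interval back on the OR scale.
se_log_or = math.sqrt(1/deaths_hcq + 1/survivors_hcq + 1/deaths_ctrl + 1/survivors_ctrl)
ci_low = math.exp(math.log(or_est) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_est) + 1.96 * se_log_or)
```

With these counts the estimated OR is about 1.31, but the confidence interval spans 1: just as for most rows in the forest plot, a single small study cannot statistically distinguish the two groups.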
In the last row, the overall effect size is given (the diamond shape), also represented by the dashed vertical line. It is slightly greater than 1, which taken at face value would mean mortality was lower for control patients than for those given HCQ. Given that the confidence interval for this overall effect size spans 1, however, we cannot statistically distinguish the mortality rates of the two groups.
From this plot then, we can say that there is very limited evidence to support using HCQ to prevent mortality in COVID-19 patients.
For two of the studies, no difference is seen between the groups; three favour the control over HCQ, and only one study supports HCQ for reducing mortality. A large randomised controlled trial by Oxford University, involving 11,000 patients, has since been published, and it came to the same conclusion: HCQ is not effective for preventing mortality in COVID-19 patients. It seems HCQ has had its 15 minutes of fame as a potential COVID-19 treatment. Considering the well-known side effects of HCQ, perhaps don’t stress too much about stocking up.
A little closer to home, and to the kind of research we do at DEEP, meta-analyses can also be extremely useful for synthesising studies in ecology and evolution, with outcomes that can inform not only theory but conservation practice. For example, a global meta-analysis of 221 studies of forest restoration success showed that forest restoration enhances biodiversity by 15–84% and vegetation structure by 36–77%, compared with degraded ecosystems. While we should avoid deforestation where possible to avoid negative biodiversity outcomes (note that biodiversity in restored forests did not return to the levels found in pristine, undisturbed old-growth forests), this study clearly shows that restoring degraded areas can benefit biodiversity. A sister paper found that in tropical forests, natural restoration surpasses active restoration: enhancement of biodiversity and vegetation structure was 34–56% and 19–56% higher, respectively, under natural regeneration than under active restoration. This evidence countered the prevailing view that active restoration should be the preferred approach for accelerating the recovery of biodiversity and vegetation structure in tropical regions. Such an insight would be hard to reach by informally reading through a disparate set of experimental studies.
While meta-analyses can offer powerful insights, they are not without drawbacks and not a shortcut to a full understanding of reality. As put in a recent comment piece in Nature, meta-analyses and systematic reviews are “statistical and scientific techniques, not magical ones”.
Consider a hypothetical meta-analysis in which a researcher is interested in how diet affects body weight. In one set of studies patients are given a high-calorie diet, and in a second set patients are given a low-calorie diet. The researcher pools these studies together, analyses them, and concludes that while some patients gain weight and others lose weight, overall there is no net effect of diet on body weight. By neglecting to extract a key piece of information from these studies (whether the diet was calorie-rich or calorie-poor) and include it in their meta-analytic model, they have made a serious error in interpreting their meta-analytic results. This example, while contrived, can and does occur in real-life meta-analyses. Meta-analysts must carefully consider the factors that might explain between-study variation in effects. Just as in any ordinary experimental study, a little common sense goes a long way in a meta-analysis.
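The diet example can be sketched in a few lines. Below, all effect sizes (mean weight change in kg) and standard errors are invented; the point is that pooling everything yields roughly no effect, while splitting by the moderator (diet type) reveals two opposite, real effects:

```python
from collections import defaultdict

# Invented studies: (diet type, mean weight change in kg, standard error).
studies = [
    ("high_calorie", 2.1, 0.5),
    ("high_calorie", 1.8, 0.6),
    ("low_calorie", -2.0, 0.4),
    ("low_calorie", -1.6, 0.7),
]

def pool(estimates):
    """Fixed-effect inverse-variance pooled estimate for (effect, SE) pairs."""
    weights = [1 / se**2 for _, se in estimates]
    return sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)

# Pooling all studies together: the effects cancel out, wrongly
# suggesting 'no net effect of diet'.
overall = pool([(e, se) for _, e, se in studies])

# Pooling within each diet type: two clear, opposite effects emerge.
by_diet = defaultdict(list)
for diet, e, se in studies:
    by_diet[diet].append((e, se))
subgroup = {diet: pool(group) for diet, group in by_diet.items()}
```

Here `overall` comes out near zero while the subgroup estimates sit around +2 kg and -2 kg, which is exactly the interpretive trap described above.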
Overall, the above should highlight that well thought out meta-analyses are an integral part of the research process. When an interesting or exciting new study comes out, the results can be oversold (by the researchers, the media, and as we have recently seen, politicians), but through formally assessing the broader body of evidence, meta-analyses can help cut through the hyperbole. They can quantify what is known, explicitly highlight research gaps, and together with the primary research they synthesize, maximize the effectiveness of scientific inquiry for progress in knowledge, policy, and medical and conservation practice.
All posts are personal reflections of the blog-post author and do not necessarily reflect the views of all other DEEP members