A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic – and those, including policy-makers, who must interpret their work.
The furore has erupted over a paper published in the Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to discover whether use of e-cigs is correlated with success in quitting, which might plausibly imply that vaping helps people give up smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research directly on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it is one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective as an aid to smoking cessation, but actually counterproductive.
The result has, predictably, been uproar among the supporters of e-cigarettes within the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of the study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the paper’s “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% and look at what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less vulnerable to any distortions that might have crept into an individual investigation?
(This might happen, for example, by inadvertently selecting participants with a greater or lesser propensity to quit smoking because of some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
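To make the “more sophisticated than averaging” point concrete, here is a minimal sketch of one standard approach – fixed-effect, inverse-variance pooling of odds ratios – using entirely invented numbers, not the actual studies in the paper:

```python
import math

# Hypothetical per-study results: (odds ratio for quitting, standard error
# of the log odds ratio). Illustrative values only.
studies = [(0.61, 0.25), (0.82, 0.30), (0.70, 0.20), (1.10, 0.40)]

# Fixed-effect, inverse-variance pooling works on the log scale:
# each study is weighted by 1/SE^2, so more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled_log_or = sum(
    w * math.log(odds) for (odds, _), w in zip(studies, weights)
) / sum(weights)
pooled_or = math.exp(pooled_log_or)

print(f"Pooled odds ratio: {pooled_or:.2f}")  # 0.73 for these invented inputs
```

A pooled odds ratio below 1 is what a claim like “28% less likely to quit” (an odds ratio of about 0.72) boils down to – and, as the critics note, the pooled figure is only as trustworthy as the individual inputs and the assumption that they measure the same thing.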
If its results are to be meaningful, the meta-analysis needs somehow to take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it introduces its own distortions.
Moreover, if the studies it’s based on are inherently flawed in some way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unfavourable view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s call for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded that they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to stop smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in the Lancet Respiratory Medicine – the same journal that published the Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the research by its nature excluded people who had taken up vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes look a more successful route to smoking cessation.
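A quick back-of-the-envelope calculation, with invented numbers purely for illustration, shows how this exclusion can depress the measured quit rate:

```python
# Hypothetical cohort of 1,000 smokers who try e-cigarettes.
# (Invented figures to illustrate Phillips's selection argument,
# not data from any actual study.)
early_quitters = 300   # switch quickly; already non-smokers before any study recruits
still_smoking = 700    # still smoking at recruitment, so eligible for the studies
later_quitters = 70    # of those 700, quit during the study period

# A study that enrols only current smokers observes a 10% quit rate...
observed_rate = later_quitters / still_smoking

# ...but across everyone who tried e-cigarettes, 37% ended up quitting.
full_cohort_rate = (early_quitters + later_quitters) / (early_quitters + still_smoking)

print(f"observed: {observed_rate:.0%}, full cohort: {full_cohort_rate:.0%}")
```

The gap between the two figures is entirely an artefact of who gets counted – which is exactly why critics argue that studies recruiting only current smokers can understate e-cigarettes’ contribution to quitting.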
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to give up combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these individuals are excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – and then including people who had no intention of quitting anyway – would certainly seem likely to affect the results of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ PR departments.