Imagine this. You’re chilling out at home, innocently scrolling through social media. After the important tasks of looking at pics of puppies and your neighbour Julie’s all-inclusive Turkish holiday, something emerges from the depths of the Zuckerverse…
Your great-uncle Leslie has shared a scientific journal article that ‘exposes the truth’ behind what you thought was a widely accepted fact among the scientific community. You’ve never been one for conspiracy theories, so you open it and give it a flick through for a laugh, expecting poor old Lez to have simply misunderstood a few fancy technical words in the ‘conclusion’.
You read it… he’s correct. The crazy claims are there and clear as day, sitting proudly in a journal!
But how??? Aren’t all journals bastions of excellence? Surely, they would never spread fake news? Why haven’t I heard this in the media? Would my doctor agree with Uncle Leslie? What time is my Tesco delivery coming?
Panic sets in. Heart beating out of my chest. Palms are sweaty. Knees weak, arms are heavy. Can I even trust it’s my Mum’s homemade sauce with spaghetti?
Truth is, a lot of not-so-great science can wander its way into journals. Understanding the difference between GUCCI and garbage when reading journal papers is a priceless tool. It’s a tricky skill to learn, even for people who have years of experience in the field.
With all things considered, here’s a handy guide of things to look out for when critically analysing scientific literature.
If the paper you’re reading is investigating the efficacy of a treatment in humans, the gold standard is a double-blind, placebo-controlled randomised trial.
This is hugely important because of the placebo effect: an ever-present, mysterious power in the brain, a bit like the dark side of the Force, that can convince your body it’s receiving a benefit from a treatment when in reality it isn’t. Less robust publications such as case studies and open-label trials have their merits, but they have significant potential to be influenced by the placebo effect and other biases, so be wary.
Demographics are always shown in clinical trial papers, so take a gander at them. Are baseline characteristics similar between treatment groups? One good check is whether the mean age of one cohort differs significantly from that of the other. To see why this matters, consider that older adults are more likely to be in poorer health, which could affect whatever outcome the trial is measuring.
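If you fancy eyeballing a baseline table yourself, it takes only a few lines of code. Here’s a minimal sketch in Python; the ages below are entirely made up for illustration, not from any real trial:

```python
from statistics import mean, stdev

# Hypothetical baseline ages from two arms of an imaginary trial
treatment_ages = [54, 61, 58, 49, 63, 57, 60, 52]
placebo_ages = [71, 68, 74, 66, 70, 73, 69, 72]

# Summarise each arm the way a paper's 'Table 1' would: mean (SD)
for name, ages in [("treatment", treatment_ages), ("placebo", placebo_ages)]:
    print(f"{name}: mean age {mean(ages):.1f} (SD {stdev(ages):.1f})")

# A large gap between arms is a red flag for confounding
age_gap = mean(placebo_ages) - mean(treatment_ages)
print(f"Mean age gap between arms: {age_gap:.1f} years")
```

In this invented example the placebo arm is over a decade older on average, so any apparent benefit of the treatment could simply reflect the younger, healthier group.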
Age isn’t the only factor that can affect the outcome of a study; regional variations in the patient population can also influence findings in surprising ways. Take recent research concerning the anti-parasitic treatment ivermectin for COVID-19.1 Here, the authors mapped the drug’s effectiveness across regions of the world where the prevalence of a particular parasite, known as Strongyloides, also varied. The meta-analysis found that ivermectin reduced mortality risk in human trials, but only in regions where Strongyloides was endemic. Ivermectin was protecting weakened COVID-19 patients, but solely by stopping the parasitic infection from progressing.
APPLICABLE TO ME?
What might work in a test tube or animal model is in no way guaranteed to work in humans. I’m pretty sure the Death Star’s superlaser could get rid of your UTI, but that doesn’t mean it’s a sick new idea for an antibiotic.
For a variety of reasons, articles are more likely to be published when null hypotheses are rejected (i.e. the tested scientific relationship is shown to be real). This is known as publication bias, and it means there may be a wealth of solid evidence sitting unpublished where nothing ‘exciting’ was found (e.g. new drug X has no effect on population Y). It’s a complex topic, and there are ongoing attempts to tackle the problem, but do bear it in mind.
Meta-analyses are publications that aim to collect all the available quality evidence to answer a question (e.g. ‘does drug X actually work?’). Frustratingly, these can still be suspiciously selective. The good news is there are some total aces out there who are revered for the quality of their meta-analyses, e.g. Cochrane reviews.2
CONFLICTS OF INTEREST
Let’s pretend that the article great-uncle Leslie shared was conclusive proof that eating Dunkin’ Donuts would make your face turn blue. If the lead author of that paper was a major stakeholder for Krispy Kreme, wouldn’t that send alarm bells ringing?
The same goes for real scientific research. A good-quality journal should always include a section for authors’ conflicts of interest. Such declarations are not necessarily bad things; however, it is important to understand their context. And if you can dig around and find juicy info that hasn’t been publicised, then oh BOY does that look shady.
Stats are mega important in science. Make sure that any relevant claims are backed up with statistically significant p-values. A p-value indicates how likely it is that a result at least as extreme as the one observed would occur by chance alone, assuming there is no real effect.
For those who want to get proper nerdy, have a look at whether the correct statistical tests and methods have been used for the data in question. T-tests, chi-squared, Bonferroni correction, ANOVA… oh gosh, I’m getting flustered just thinking about them all.
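To make the idea of ‘how likely a result is to occur by chance’ concrete, here’s a minimal sketch of a permutation test in Python: one simple, assumption-light way to estimate a p-value. All the outcome scores below are invented for illustration:

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate a two-sided p-value for the difference in means
    between two groups by randomly reshuffling group labels."""
    rng = random.Random(seed)  # fixed seed so the estimate is reproducible
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        # Difference in means under a random relabelling of participants
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    # Fraction of shuffles at least as extreme as the real result
    return extreme / n_permutations

# Hypothetical outcome scores for a treatment and a placebo group
treatment = [8.1, 7.9, 8.4, 7.6, 8.8, 8.2, 7.7, 8.5]
placebo = [7.2, 7.5, 6.9, 7.8, 7.1, 7.4, 7.0, 7.6]

p = permutation_p_value(treatment, placebo)
print(f"p = {p:.4f}")
```

A tiny p here means that, if the treatment genuinely did nothing, a gap between group means this large would almost never appear by luck of the draw. In real papers you’ll usually see classical tests (t-tests and friends) instead, but the interpretation of the resulting p-value is the same.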
CONCRETE OR YEET?
Sometimes, if it’s truly pants, a journal article can be ‘retracted’, meaning it is no longer considered part of the journal it was initially published in. Even then, it may lurk about on the web, a bit like your other neighbour Debbie, who couldn’t afford a Turkish villa and so spreads gossip to Julie’s ex-husband that Julie could only afford hers after purposely crashing her car into that nice old lady and framing her for compensation.
Back to the issue. Have a nosey about online — databases like PubMed and websites (e.g. www.retractionwatch.com) can tell you if something has been yeeted out of a journal, and quite often why.
If you want to get super stuck in and make sure you cover all bases, then it’s your lucky day! There are some hugely detailed checklists available online from Cochrane for working out the risk of bias in different types of studies.3 Other, less detailed versions are available elsewhere online and cover the main principles.
SEE IF SOMEBODY DID THE HARD WORK ALREADY
Wearing both a deerstalker hat and a lab coat can be tiresome stuff. All that reading and being clever malarkey is so much effort, but Uncle Lez just won’t give it a rest. If only there was a way for someone else to do the work for me…
Well guess what?
Published papers that go against the prevailing narrative can often get a lot of attention. This naturally invites scrutiny from across the scientific community, which in turn sometimes reveals pretty shocking findings, so look out for them. Sites like Google Scholar can show you if such criticisms have formed part of an entirely different publication.
The great thing about letting someone else do the detective work is that you can be led to the scene of the crime, where you can check the deets for yourself.
PREDATORY JOURNALS & PUBLISHERS
Predatory publishers have been defined as:
“…entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices”4

Predatory publishers can dupe all of us – including scientists. One study found that out of 46,000 Italian researchers, 5% had published in such journals.5
It’s a difficult job to know who is friend or foe – there are countless ways to be deceived by them. Honestly, this topic deserves its own blog post (hint hint).
The important thing to take home is that a handful of organisations maintain blacklists/whitelists that can help you ascertain who’s who. So be cautious, and have a good ol’ nosey about on the web.
Summing everything up, know that it’s totally OK to have a difference of opinion with your Uncle Leslie. However, remember that spreading misinformation can be harmful. So, keep an open mind, keep it objective, and keep it honest. Appraise all the data before reaching conclusions, just like Julie did when choosing the perfect Turkish holibob getaway villa.
1. Bitterman A et al. Comparison of Trials Using Ivermectin for COVID-19 Between Regions With High and Low Prevalence of Strongyloidiasis. JAMA Netw Open. 2022;5(3):e223079. doi: 10.1001/jamanetworkopen.2022.3079
2. Cochrane. Cochrane Library. Available at: https://www.cochranelibrary.com/ [Accessed 19th August 2022]
3. University of Bristol. Risk of bias assessment tools. Available at: https://www.riskofbias.info [Accessed 19th August 2022]
4. Grudniewicz A et al. Predatory journals: no definition, no defence. Nature. 2019;576(7786):210-212. doi: 10.1038/d41586-019-03759-y
5. Bagues M, Sylos-Labini M & Zinovyeva N. A walk on the wild side: ‘Predatory’ journals and information asymmetries in scientific evaluations. Research Policy. 2019;48(2):462-477