
Article Alert

The free Article Alert service delivers a weekly email to your inbox with citations to the most recently published articles on all aspects of systematic review and comparative effectiveness review methodology.

  • Covers methodology research literature from medicine, psychology, education, and other fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
  • Averages 20 citations/week (pertinent citations screened from more than 1,500 citations weekly)
  • Saves you time AND keeps you up to date on the latest research


Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file for importing all citations into EndNote, RefWorks, Zotero, or other citation management software

To sign up for free email updates of Article Alert, contact the Scientific Resource Center Library at methods@epc-src.org.


The Article Alert for the week of July 20, 2015 (sample articles)

Seehra J, Pandis N, Koletsi D, Fleming P. Use of quality assessment tools in systematic reviews was varied and inconsistent. J Clin Epidemiol. Epub 2015 Jul 4. PMID: 26151664.

Objective: To assess the use of quality assessment tools among a cross-section of systematic reviews (SRs) and to further evaluate whether quality was used as a parameter in the decision to include primary studies within subsequent meta-analysis.
Study Design and Setting: We searched PubMed for systematic reviews (interventional, observational and diagnostic) published in Core Clinical Journals between January 1st and March 31st, 2014.
Results: 310 systematic reviews were identified. Quality assessment was undertaken in 223 (71.9%), with isolated use of the Cochrane risk of bias tool (26%, n = 58) and the Newcastle-Ottawa Scale (15.3%, n = 34) most common. A threshold level of primary study quality for subsequent meta-analysis was used in 13.2% (41/310) of reviews. Overall, fifty-four combinations of quality assessment tools were identified, with a similar preponderance of tools used among observational and interventional reviews. Multiple tools were used in 11.6% (n = 36) of SRs overall.
Conclusion: We found that quality assessment tools were used in a majority of SRs; however, a threshold level of quality for meta-analysis was stipulated in just 13.2% (n= 41). This cross-sectional analysis provides further evidence of the need for more active or intuitive editorial processes to enhance the reporting of systematic reviews.
Copyright © 2015 Elsevier Inc. All rights reserved.


Cole GD, Shun-Shin MJ, Nowbar AN, Buell KG, Al-Mayahi F, Zargaran D, Mahmood S, Singh B, Mielewczik M, Francis DP. Difficulty in detecting discrepancies in a clinical trial report: 260-reader evaluation. Int J Epidemiol. Epub 2015 Jul 13. PMID: 26174517.

Background: Scientific literature can contain errors. Discrepancies, defined as two or more statements or results that cannot both be true, may be a signal of problems with a trial report. In this study, we report how many discrepancies are detected by a large panel of readers examining a trial report containing a large number of discrepancies.
Methods: We approached a convenience sample of 343 journal readers in seven countries, and invited them in person to participate in a study. They were asked to examine the tables and figures of one published article for discrepancies. 260 participants agreed, ranging from medical students to professors. The discrepancies they identified were tabulated and counted. There were 39 different discrepancies identified. We evaluated the probability of discrepancy identification, and whether more time spent or greater participant experience as academic authors improved the ability to detect discrepancies.
Results: Overall, 95.3% of discrepancies were missed. Most participants (62%) were unable to find any discrepancies. Only 11.5% noticed more than 10% of the discrepancies. More discrepancies were noted by participants who spent more time on the task (Spearman's ρ = 0.22, P < 0.01), and those with more experience of publishing papers (Spearman's ρ = 0.13 with number of publications, P = 0.04).
Conclusions: Noticing discrepancies is difficult. Most readers miss most discrepancies even when asked specifically to look for them. The probability of a discrepancy evading an individual sensitized reader is 95%, making it important that, when problems are identified after publication, readers are able to communicate with each other. When made aware of discrepancies, the majority of readers support editorial action to correct the scientific record.
© The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
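
As a back-of-envelope illustration of why the authors stress reader-to-reader communication (the figures below are not from the article and assume readers miss discrepancies independently), a per-reader evasion probability of 95% shrinks quickly once many readers examine the same report:

    # Illustrative arithmetic, not taken from the article: assume each reader
    # independently misses a given discrepancy with probability 0.95 (the
    # per-reader evasion rate quoted above). The chance that the discrepancy
    # evades every one of n readers is then 0.95 ** n.
    for n in (1, 10, 50, 100):
        print(f"{n:>3} readers: P(discrepancy evades all) = {0.95 ** n:.3f}")

With 100 such readers the evasion probability falls below 1%, which is the intuition behind pooling readers' observations after publication.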


Guan M, Vandekerckhove J. A Bayesian approach to mitigation of publication bias. Psychon Bull Rev. Epub 2015 Jul 1. PMID: 26126776.

The reliability of published research findings in psychology has been a topic of rising concern. Publication bias, or treating positive findings differently from negative findings, is a contributing factor to this "crisis of confidence," in that it likely inflates the number of false-positive effects in the literature. We demonstrate a Bayesian model averaging approach that takes into account the possibility of publication bias and allows for a better estimate of true underlying effect size. Accounting for the possibility of bias leads to a more conservative interpretation of published studies as well as meta-analyses. We provide mathematical details of the method and examples.
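
The article itself supplies the mathematical details of the method. As a rough, self-contained sketch of the general idea only (not the authors' model; the effect sizes, standard errors, prior, and grid below are hypothetical), one can average a conventional "no publication bias" estimate with an estimate from a model assuming that only statistically significant results were published, weighting each by its posterior model probability:

    import numpy as np
    from scipy import stats

    # Hypothetical study-level data: observed effect sizes and standard errors.
    effects = np.array([0.45, 0.60, 0.38, 0.72, 0.51])
    ses = np.array([0.20, 0.25, 0.18, 0.30, 0.22])
    crit = 1.96 * ses                      # two-sided 5% significance cutoff per study

    delta = np.linspace(-1.0, 2.0, 2001)   # grid over the true effect size
    d_delta = delta[1] - delta[0]
    prior = stats.norm.pdf(delta, 0, 1)    # weakly informative prior on delta

    # Model M0 (no bias): ordinary normal likelihood for each observed effect.
    dens = stats.norm.pdf(effects[:, None], delta[None, :], ses[:, None])
    lik0 = dens.prod(axis=0)

    # Model M1 (publication bias): only significant results are published, so each
    # observed effect follows a normal distribution truncated to |effect| > cutoff.
    p_sig = (stats.norm.sf(crit[:, None], delta[None, :], ses[:, None])
             + stats.norm.cdf(-crit[:, None], delta[None, :], ses[:, None]))
    lik1 = (dens / p_sig).prod(axis=0)

    # Marginal likelihoods (grid approximation) and the posterior probability of
    # the bias model, assuming equal prior probability for the two models.
    marg0 = (lik0 * prior).sum() * d_delta
    marg1 = (lik1 * prior).sum() * d_delta
    post_m1 = marg1 / (marg0 + marg1)

    # Posterior mean of delta under each model, and the model-averaged estimate.
    est0 = (delta * lik0 * prior).sum() / (lik0 * prior).sum()
    est1 = (delta * lik1 * prior).sum() / (lik1 * prior).sum()
    bma = (1 - post_m1) * est0 + post_m1 * est1

    print(f"P(bias model | data) = {post_m1:.2f}")
    print(f"Effect estimate: no-bias {est0:.2f}, bias-adjusted {est1:.2f}, averaged {bma:.2f}")

When the data look as though nonsignificant studies are missing, the bias model receives more posterior weight and pulls the averaged effect size toward a more conservative value, mirroring the more conservative interpretation of published studies and meta-analyses described in the abstract.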