Article Alert

The free Article Alert service delivers a weekly email containing the most recently published articles on all aspects of systematic review and comparative effectiveness review methodology.

  • Covers methodology research literature from medicine, psychology, education, and other fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
  • Averages 20 citations per week, screened for relevance from more than 1,500 citations reviewed weekly
  • Saves you time AND keeps you up to date on the latest research


Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file to upload all citations to EndNote, RefWorks, Zotero, or other citation software (a sample record appears below)
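
A rough sketch of what a single record in the RIS attachment might contain, written out with a short Python snippet; the field values are taken from the Pieper et al. citation below, and the exact fields in an actual Article Alert file may differ.

    # Sketch only: builds one RIS record (values from the Pieper et al. citation
    # below) and writes it to a file that EndNote, RefWorks, or Zotero can import.
    record = "\n".join([
        "TY  - JOUR",
        "AU  - Pieper, D.",
        "AU  - Mathes, T.",
        "AU  - Eikermann, M.",
        "TI  - The impact of choice of quality appraisal tool for systematic reviews in overviews",
        "JO  - J Evid Based Med",
        "PY  - 2014",
        "VL  - 7",
        "IS  - 2",
        "SP  - 72",
        "EP  - 78",
        "ER  - ",
    ])

    with open("article_alert_sample.ris", "w", encoding="utf-8") as handle:
        handle.write(record + "\n")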

To sign up for free email updates of Article Alert, contact the Scientific Resource Center Library at methods@epc-src.org.

 

The Article Alert for the week of August 25, 2014 (sample articles)

Pieper D, Mathes T, Eikermann M. The impact of choice of quality appraisal tool for systematic reviews in overviews. J Evid Based Med. 2014 May;7(2):72-8.

OBJECTIVE: The question of whether the choice of a critical appraisal tool has an impact on the result of the evidence synthesis in systematic reviews has been neglected by research, as have the psychometric properties of critical appraisal tools. The objective of this study is to examine this question in the context of overviews (reviews of reviews).
METHODS: Based on a published overview investigating the hospital volume-outcome relationship in surgery, the 32 systematic reviews it included were independently evaluated with four critical appraisal tools by two reviewers. We rated the relationship on a five-point rating scale using qualitative evidence synthesis. Measures of reliability and correlation coefficients were calculated.
RESULTS: The result of the evidence synthesis did not depend on the choice of critical appraisal tool. Interrater reliability differed depending on the tool, with Cohen's kappa ranging from 0.47 to 0.76. There was high heterogeneity between the two pairs of reviewers.
CONCLUSION: The choice of critical appraisal tool (CAT) has no impact on the result of the evidence synthesis, despite differences in the components covered by each CAT. Further studies should concentrate on investigating psychometric properties and the impact of the choice of CAT on evidence synthesis in other contexts. The high heterogeneity between the two pairs of reviewers, all of them experienced in appraising systematic reviews, indicates a degree of interpretability in the items.
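
Cohen's kappa, the interrater reliability statistic reported in the RESULTS above, is straightforward to compute. The following Python sketch uses invented ratings purely for illustration; it is not code or data from the article.

    from collections import Counter

    def cohen_kappa(rater1, rater2):
        # Observed agreement: proportion of items on which the two raters agree.
        n = len(rater1)
        p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
        # Chance agreement: expected from each rater's marginal label frequencies.
        counts1, counts2 = Counter(rater1), Counter(rater2)
        p_chance = sum(counts1[k] * counts2[k] for k in set(counts1) | set(counts2)) / n ** 2
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical quality ratings (1-5 scale) from two reviewers of ten reviews
    reviewer_a = [4, 3, 5, 2, 4, 3, 3, 5, 2, 4]
    reviewer_b = [4, 3, 4, 2, 4, 2, 3, 5, 3, 4]
    print(round(cohen_kappa(reviewer_a, reviewer_b), 2))  # prints 0.59 for these ratings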

 

Robinson KA, Whitlock EP, O'Neil ME, Anderson JK, Hartling L, Dryden DM, Butler M, Newberry SJ, McPheeters M, Berkman ND, et al. Integration of existing systematic reviews into new reviews: identification of guidance needs. Syst Rev. 2014 Jun 23;3(1):60. PMID: 24956937.

BACKGROUND: An exponential increase in the number of systematic reviews published and constrained resources for new reviews mean that there is an urgent need for guidance on explicitly and transparently integrating existing reviews into new systematic reviews. The objectives of this paper are: 1) to identify areas where existing guidance may be adopted or adapted, and 2) to suggest areas for future guidance development.
METHODS: We searched documents and websites from healthcare-focused systematic review organizations to identify and, where available, summarize relevant guidance on the use of existing systematic reviews. We conducted informational interviews with members of Evidence-based Practice Centers (EPCs) to gather their experiences in integrating existing systematic reviews, including common issues and challenges as well as potential solutions.
RESULTS: There was consensus among the systematic review organizations and the EPCs about some aspects of incorporating existing systematic reviews into new reviews. Current guidance may be used in assessing the relevance of prior reviews and in scanning the references of prior reviews to identify studies for a new review. However, areas of challenge remain. Areas in need of guidance include how to synthesize, grade the strength of, and present bodies of evidence composed of primary studies and existing systematic reviews. For instance, empirical evidence is needed regarding how to quality-check data abstraction and when and how to use study-level risk of bias assessments from prior reviews.
CONCLUSIONS: There remain areas of uncertainty for how to integrate existing systematic reviews into new reviews. Methods research and consensus processes among systematic review organizations are needed to develop guidance to address these challenges.

 

Tang LL, Caudy M, Taxman F. A statistical method for synthesizing meta-analyses. Comput Math Methods Med. 2013;2013:732989. PMID: 24194787.

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
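
The two-step procedure described in this abstract can be pictured with a short Python sketch. The numbers below are invented, standardized mean difference (SMD) is assumed as the common metric, and the familiar log-odds-ratio-to-SMD conversion (multiplying by sqrt(3)/pi) stands in for step one; none of these choices are taken from the paper itself.

    import math

    # Summary estimates from two hypothetical meta-analyses of the same question:
    # one reports a log odds ratio, the other a standardized mean difference.
    meta_analyses = [
        {"type": "logOR", "est": 0.62, "var": 0.040},
        {"type": "SMD",   "est": 0.30, "var": 0.015},
    ]

    def to_smd(m):
        # Step 1: convert each summary estimate to the common metric (SMD here).
        if m["type"] == "SMD":
            return m["est"], m["var"]
        if m["type"] == "logOR":
            factor = math.sqrt(3) / math.pi
            return m["est"] * factor, m["var"] * factor ** 2
        raise ValueError("unsupported effect-size type")

    # Step 2: combine the converted estimates with an inverse-variance weighted mean.
    converted = [to_smd(m) for m in meta_analyses]
    weights = [1.0 / var for _, var in converted]
    pooled = sum(w * est for (est, _), w in zip(converted, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    print(f"pooled SMD = {pooled:.3f} (SE {se:.3f})")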