Article Alert

The free Article Alert service delivers a weekly email digest of the most recently published articles on all aspects of systematic review and comparative effectiveness review methodologies.

  • Covers methodology research literature from medicine, psychology, education, and related fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature Web sites
  • Averages 20 citations per week, screened for pertinence from more than 1,500 citations weekly
  • Saves you time AND keeps you up to date on the latest research


Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file for importing all citations into EndNote, RefWorks, Zotero, or other citation software (a minimal example of the format appears below)
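
RIS is a simple tagged, plain-text citation format. As an illustration only, here is a minimal Python sketch that writes a single RIS record of the kind a citation manager can import; the field values and DOI are hypothetical, not taken from an actual Article Alert file.

    # Minimal sketch: write one RIS record for import into EndNote,
    # RefWorks, or Zotero. Field values below are hypothetical.
    record = "\n".join([
        "TY  - JOUR",                    # reference type: journal article
        "AU  - Kadic, AJ",               # one AU line per author
        "TI  - Extracting data from figures with software was faster",
        "JO  - J Clin Epidemiol",        # journal name
        "PY  - 2016",                    # publication year
        "DO  - 10.1000/example-doi",     # DOI tag (value is a placeholder)
        "ER  - ",                        # end of record
    ])

    with open("article_alert.ris", "w", encoding="utf-8") as f:
        f.write(record + "\n")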

To sign up for free email updates of Article Alert, contact the Scientific Resource Center at methods@epc-src.org.

 

The Article Alert for the week of February 1, 2016 (sample articles)

Kadic AJ, Vucic K, Dosenovic S, Sapunar D, Puljak L. Extracting data from figures with software was faster, with higher inter-rater reliability than manual extraction. J Clin Epidemiol. Epub 2016 Jan 9. PMID: 26780258.

Objectives: To compare the speed and accuracy of graphical data extraction by manual estimation and by open-source software.
Study Design: Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open-source software tool. Corresponding authors of each RCT were contacted up to four times via email to obtain the exact numbers used to create the graphs. The accuracy of each method was compared against the source data from which the original graphs were produced.
Results: Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively.
Conclusions: Data extraction from figures should be conducted using software, and manual estimation should be avoided. Using software to extract data presented only in figures is faster and enables higher inter-rater reliability.
Copyright © 2016 Elsevier Inc. All rights reserved.
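
As context for the percent-agreement figures reported above: agreement between two raters is simply the share of data points on which their extracted values match. A minimal Python sketch, using toy values rather than the study's data:

    # Minimal sketch: percent agreement between two raters over the
    # same set of extracted data points. Toy values, not study data.
    def percent_agreement(rater_a, rater_b):
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return 100.0 * matches / len(rater_a)

    rater_a = [4.1, 3.9, 7.2, 5.0]   # values extracted by rater A
    rater_b = [4.1, 4.0, 7.2, 5.0]   # values extracted by rater B
    print(f"{percent_agreement(rater_a, rater_b):.1f}%")  # -> 75.0%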

 

Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, et al. Improving GRADE Evidence Tables part 3: Guidance for useful GRADE certainty in the evidence judgments through explanatory footnotes. J Clin Epidemiol. Epub 2016 Jan 12. PMID: 26796947.

Background: The Grading of Recommendations Assessment, Development and Evaluation (GRADE) is a widely used, reliable, and accurate approach for assessing the certainty in a body of health evidence. The GRADE working group has provided detailed guidance for assessing the certainty in the body of evidence in systematic reviews and health technology assessments (HTA), and for grading the strength of health recommendations. However, there is limited advice on how to maximize the transparency of these judgments, in particular through explanatory footnotes or explanations in Summary of Findings tables and Evidence Profiles (GRADE evidence tables).
Methods: We conducted this study to define the essential attributes of useful explanations and to develop specific guidance for explanations associated with GRADE evidence tables. We selected a sample of explanations, according to their complexity, the type of judgment involved, and their appropriateness, from a database of published GRADE evidence tables in Cochrane reviews and World Health Organization (WHO) guidelines. We used an iterative process and group consensus to determine the attributes and develop the guidance.
Results: Explanations in GRADE evidence tables should be concise, informative, relevant, easy to understand, and accurate. We provide general and domain-specific guidance to assist authors with achieving these desirable attributes in their explanations associated with GRADE evidence tables.
Conclusions: Adhering to the general and GRADE domain-specific guidance should improve the quality of explanations associated with GRADE evidence tables and provide authors of systematic reviews, HTA reports, and guidelines with information that they can use in other parts of their evidence synthesis. This guidance will also support editorial evaluation of evidence syntheses using GRADE and provide a minimum quality standard for judgments across tables.
Copyright © 2016 Elsevier Inc. All rights reserved.

 

Gartlehner G, Dobrescu A, Evans TS, Bann C, Robinson KA, Reston J, Thaler K, Skelly A, Glechner A, Peterson K, et al. The predictive validity of quality of evidence grades for the stability of effect estimates was low: a meta-epidemiological study. J Clin Epidemiol. 2016 Feb;70:52-60. PMID: 26342443.

Objective: To determine the predictive validity of the U.S. Evidence-based Practice Center (EPC) approach to GRADE (Grading of Recommendations Assessment, Development and Evaluation).
Study Design and Setting: Based on Cochrane reports with outcomes graded as high quality of evidence (QOE), we prepared 160 documents that represented different levels of QOE. Professional systematic reviewers dually graded the QOE. For each document, we determined whether estimates were concordant with the high-QOE estimates of the Cochrane reports. We compared the observed proportion of concordant estimates with the expected proportion from an international survey. To determine the predictive validity, we used the Hosmer-Lemeshow test to assess calibration and the C (concordance) index to assess discrimination.
Results: The predictive validity of the EPC approach to GRADE was limited. Estimates graded as high QOE were less likely, and estimates graded as low or insufficient QOE were more likely, to remain stable than expected. The EPC approach to GRADE could not reliably predict the likelihood that individual bodies of evidence would remain stable as new evidence becomes available. C-indices ranged between 0.56 (95% CI, 0.47 to 0.66) and 0.58 (95% CI, 0.50 to 0.67), indicating low discriminatory ability.
Conclusion: The limited predictive validity of the EPC approach to GRADE seems to reflect a mismatch between expected and observed changes in treatment effects as bodies of evidence advance from insufficient to high QOE.
Copyright © 2016 Elsevier Inc. All rights reserved.
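
For readers less familiar with the two statistics named in the Methods above: for a binary outcome, the C (concordance) index equals the area under the ROC curve, and the Hosmer-Lemeshow test compares observed with expected event counts across groups of predicted risk. A minimal Python sketch on simulated data, not the study's:

    # Minimal sketch of the two statistics named above, on simulated data.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    p = rng.uniform(0.05, 0.95, 500)             # predicted probabilities (toy)
    y = (rng.uniform(size=500) < p).astype(int)  # binary outcomes drawn from p

    # C-index: proportion of (event, non-event) pairs in which the event
    # received the higher predicted probability (ties count as half).
    diffs = p[y == 1][:, None] - p[y == 0][None, :]
    c_index = np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)
    print(f"C-index: {c_index:.3f}")

    # Hosmer-Lemeshow: split into G = 10 groups by predicted risk, then
    # H = sum (O_g - E_g)^2 / (E_g * (1 - E_g / n_g)), df = G - 2.
    order = np.argsort(p)
    G, H = 10, 0.0
    for grp in np.array_split(order, G):
        o_g = y[grp].sum()     # observed events in the group
        e_g = p[grp].sum()     # expected events in the group
        H += (o_g - e_g) ** 2 / (e_g * (1 - e_g / len(grp)))
    print(f"HL statistic: {H:.2f}, p = {chi2.sf(H, G - 2):.3f}")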