Article Alert

The free Article Alert service delivers a weekly email to your inbox containing the most recently published articles on all aspects of systematic review and comparative effectiveness review methodologies.

  • Covers methodology research literature in medicine, psychology, education, and other fields
  • Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
  • Averages 20 citations per week, screened for relevance from more than 1,500 citations reviewed weekly
  • Saves you time AND keeps you up to date on the latest research


Article Alert records include:

  • Citation information/abstract
  • Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
  • Free Full Text: PubMed Central or publisher link (when available)
  • RIS file for importing all citations into EndNote, RefWorks, Zotero, or other citation management software (a sample record is shown below)
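
For illustration, a single reference in the RIS file looks roughly like the record below (sketched from one of the sample citations further down this page; the exact tags in a delivered file may differ slightly):

    TY  - JOUR
    AU  - Riley, RD
    TI  - Summarising and validating test accuracy results across multiple studies for use in clinical practice
    JO  - Statistics in Medicine
    PY  - 2015
    VL  - 34
    IS  - 13
    SP  - 2081
    EP  - 2103
    ER  -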

To sign up for free email updates of Article Alert, contact the Scientific Center Resource Library at methods@epc-src.org.

 

The Article Alert for the week of May 11, 2015 (sample articles)

Kato EU, Hartling L, Guise JM. Methods and context for the production of rapid reviews. Proceedings of the ISPOR 20th Annual International Meeting Research Abstracts; 2015 May 16-20; Philadelphia, PA. Value Health. 2015;18(3):A36.

Systematic reviews (SRs) are critically important to support decision making in health care. Interest in reliable and quick evidence synthesis has sparked the development of "rapid reviews," yet no clear consensus exists on what these are or what processes they use. The goal of this project was to understand and describe practices of conducting rapid reviews.

 

Riley RD, Ahmed I, Debray TP, Willis BH, Noordzij JP, Higgins JP, Deeks JJ. Summarising and validating test accuracy results across multiple studies for use in clinical practice. Stat Med. 2015 Jun 15;34(13):2081-103. PMID: 25800943.

Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV.
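
The post-test probabilities discussed in this abstract rest on the standard Bayes relationship between a test's summary sensitivity, specificity, and the disease prevalence in the new population. The short Python sketch below illustrates only that basic step, with made-up numbers; it does not reproduce the paper's prediction intervals or cross-validated calibration checks.

    # Standard Bayes step linking summary sensitivity and specificity to
    # post-test probabilities (PPV, NPV) in a new population. Illustrative
    # values only; not the authors' prediction-interval or calibration methods.

    def post_test_probabilities(sensitivity, specificity, prevalence):
        """Return (PPV, NPV) for a population with the given disease prevalence."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        false_neg = (1 - sensitivity) * prevalence
        true_neg = specificity * (1 - prevalence)
        ppv = true_pos / (true_pos + false_pos)
        npv = true_neg / (true_neg + false_neg)
        return ppv, npv

    if __name__ == "__main__":
        # The same summary accuracy gives different post-test probabilities
        # when tailored to different prevalences, as the abstract emphasizes.
        for prevalence in (0.10, 0.30):
            ppv, npv = post_test_probabilities(0.85, 0.90, prevalence)
            print(f"prevalence={prevalence:.2f}: PPV={ppv:.3f}, NPV={npv:.3f}")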

 

Selph SS, Ginsburg AD, Chou R. Impact of contacting study authors to obtain additional data for systematic reviews: diagnostic accuracy studies for hepatic fibrosis. Syst Rev. 2014 Sep 19;3:107. PMID: 25239493.

Background: Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and impact on the systematic review findings.
Methods: Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and negative likelihood ratios were recalculated.
Results: Of the 66 authors, 68% were successfully contacted and 42% provided additional data, covering 29 of the 77 studies (38%). All authors who provided data did so by the third emailed request (10 authors provided data after a single request). Authors of more recent studies were more likely to be located and to provide data than authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for 10 of the 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful, or vice versa, for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which the data had previously been insufficient. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy.
Conclusions: We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients impacted conclusions regarding diagnostic utility for two blood tests and enabled the calculation of an estimate for a third blood test. Despite relatively extensive efforts, we were unable to obtain data to resolve discrepancies or complete 2 × 2 tables for 62% of studies.
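
For readers less familiar with the measures being recalculated, the Python sketch below shows how sensitivity, specificity, and the positive and negative likelihood ratios follow from a completed 2 × 2 table. The counts are illustrative placeholders, not data from any study in the review.

    # Generic diagnostic accuracy measures from a completed 2x2 table
    # (true/false positives and negatives). Counts are placeholders.

    def accuracy_from_2x2(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # proportion of diseased correctly identified
        specificity = tn / (tn + fp)   # proportion of non-diseased correctly identified
        lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
        lr_negative = (1 - sensitivity) / specificity   # negative likelihood ratio
        return {"sensitivity": sensitivity, "specificity": specificity,
                "LR+": lr_positive, "LR-": lr_negative}

    if __name__ == "__main__":
        for name, value in accuracy_from_2x2(tp=40, fp=15, fn=10, tn=135).items():
            print(f"{name}: {value:.2f}")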