The free Article Alert service delivers a weekly email to your inbox containing the most recently published articles on all aspects of systematic review and comparative effectiveness review methodologies.
- Covers methodology research literature from medicine, psychology, education, and related fields
- Curated by our seasoned research staff from a wide array of sources: PubMed, journal tables of contents, author alerts, bibliographies, and prominent international methodology and grey literature websites
- Averages 20 citations/week (pertinent citations screened from more than 1,500 citations weekly)
- Saves you time AND keeps you up to date on the latest research
Article Alert records include:
- Citation information/abstract
- Links: PMID (PubMed ID) and DOI (Digital Object Identifier)
- Free Full Text: PubMed Central or publisher link (when available)
- RIS file to upload all citations to EndNote, RefWorks, Zotero, or other citation software
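RIS is a simple tagged plain-text format that most citation managers can import. As an illustration only (not the Article Alert service's actual export), here is a minimal RIS record for one of the sample citations below, with a few lines of Python to read it back:

```python
# Minimal sketch of an RIS record and a tiny parser; illustrative only,
# not the Article Alert service's actual export format.
record = """TY  - JOUR
AU  - Rathbone, J.
AU  - Hoffmann, T.
AU  - Glasziou, P.
TI  - Faster title and abstract screening? Evaluating Abstrackr
JO  - Syst Rev
PY  - 2015
ER  - 
"""

def parse_ris(text):
    """Parse a single RIS record into a dict mapping tag -> list of values."""
    fields = {}
    for line in text.splitlines():
        # RIS lines look like "AU  - Rathbone, J." (tag, two spaces, "- ")
        if len(line) >= 6 and line[2:6] == "  - ":
            tag, value = line[:2], line[6:].strip()
            fields.setdefault(tag, []).append(value)
    return fields

fields = parse_ris(record)
print(fields["AU"])  # ['Rathbone, J.', 'Hoffmann, T.', 'Glasziou, P.']
```

In practice you would simply save the weekly RIS file and use your citation manager's import function; the parser above just shows how little structure the format has.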
To sign up for free email updates of Article Alert, contact the Scientific Center Resource Library at firstname.lastname@example.org.
The Article Alert for the week of June 29, 2015 (sample articles)
Kamdar BB, Shah PA, Sakamuri S, Kamdar BS, Oh J. A Novel Search Builder to Expedite Search Strategies for Systematic Reviews. Int J Technol Assess Health Care. Epub 2015 May 20. PMID: 25989817.
Objectives: Developing a search strategy for use in a systematic review is a time-consuming process requiring construction of detailed search strings using complicated syntax, followed by iterative fine-tuning and trial-and-error testing of these strings in online biomedical search engines.
Methods: To address the limitations of existing online-only search builders, a user-friendly computer-based tool was created to expedite search strategy development as part of production of a systematic review.
Results: Search Builder 1.0 is a Microsoft Excel®-based tool that automatically assembles search strategy text strings for PubMed (www.pubmed.com) and Embase (www.embase.com), based on a list of user-defined search terms and preferences. With the click of a button, Search Builder 1.0 automatically populates the syntax needed for functional search strings, and copies the string to the clipboard for pasting into PubMed or Embase. The offline file-based interface of Search Builder 1.0 also allows for searches to be easily shared and saved for future reference.
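Search Builder 1.0 itself is an Excel tool and is not reproduced here, but the core idea, assembling a fielded boolean string from grouped search terms, can be sketched in a few lines of Python (the function name and the choice of the [Title/Abstract] field tag are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch of assembling a PubMed-style boolean search string
# from user-defined terms; this is NOT the authors' Search Builder 1.0.
def build_pubmed_string(concept_groups, field="Title/Abstract"):
    """OR the synonyms within each concept, then AND the concepts together."""
    clauses = []
    for terms in concept_groups:
        ored = " OR ".join(f'"{t}"[{field}]' for t in terms)
        clauses.append(f"({ored})")
    return " AND ".join(clauses)

query = build_pubmed_string([
    ["systematic review", "meta-analysis"],
    ["screening"],
])
print(query)
# ("systematic review"[Title/Abstract] OR "meta-analysis"[Title/Abstract]) AND ("screening"[Title/Abstract])
```

The OR-within-concept / AND-between-concepts pattern is the standard structure of systematic review search strategies, which is what makes this step so amenable to automation.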
Conclusions: This novel, user-friendly tool can save considerable time and streamline a cumbersome step in the systematic review process.
- DOI: http://dx.doi.org/10.1017/S0266462315000136
- PubMed: http://www.ncbi.nlm.nih.gov/pubmed/25989817
Rathbone J, Hoffmann T, Glasziou P. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Syst Rev. 2015 Jun 15;4(1):80. PMID: 26073974.
Background: Citation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.
Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.
Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9% rituximab, 40% dietary fibre, 67% aHUS, and 57% ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16% (aHUS) to 45% (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7%. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25% and increased the workload saving by 10% but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80%) but reduced the precision (6.8%) and increased the number of missed citations.
Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.
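The performance measures reported above can be computed directly from the screening counts. A minimal sketch, with hypothetical counts and with the assumption that "workload saving" means the fraction of citations the tool classified automatically (the study's exact definitions may differ):

```python
# Hedged sketch of the screening metrics described in the abstract above;
# definitions are assumptions and all counts below are hypothetical.
def screening_metrics(tp, fp, fn, tn, total_citations):
    """Compute precision, false negative rate, and workload saving.

    tp/fp/fn/tn refer to the tool's predictions on auto-classified
    citations, judged against the original review decisions.
    """
    predicted_relevant = tp + fp                 # flagged for full-text review
    precision = tp / predicted_relevant if predicted_relevant else 0.0
    fnr = fn / (tp + fn) if (tp + fn) else 0.0   # relevant studies missed
    auto_classified = tp + fp + fn + tn          # left to the algorithm
    workload_saving = auto_classified / total_citations
    return precision, fnr, workload_saving

# Hypothetical example: 2,000 citations, 1,200 classified automatically
precision, fnr, saving = screening_metrics(tp=40, fp=210, fn=2, tn=948,
                                           total_citations=2000)
print(f"precision={precision:.2f} FNR={fnr:.3f} saving={saving:.0%}")
# precision=0.16 FNR=0.048 saving=60%
```

This makes the trade-off in the abstract concrete: lowering the prediction threshold classifies more citations automatically (higher workload saving) but lets more relevant studies slip through (higher false negative rate, lower precision).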
- FREE FULL TEXT: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4472176/pdf/13643_2015_Article_67.pdf
- DOI: http://dx.doi.org/10.1186/s13643-015-0067-6
- PubMed: http://www.ncbi.nlm.nih.gov/pubmed/26073974
- PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4472176/
Shao W, Adams CE, Cohen AM, Davis JM, McDonagh MS, Thakurta S, Yu PS, Smalheiser NR. Aggregator: a machine learning approach to identifying MEDLINE articles that derive from the same underlying clinical trial. Methods. 2015 Mar;74:65-70. PMID: 25461812.
Objective: It is important to identify separate publications that report outcomes from the same underlying clinical trial, in order to avoid over-counting these as independent pieces of evidence.
Methods: We created positive and negative training sets (comprised of pairs of articles reporting on the same condition and intervention) that were, or were not, linked to the same clinicaltrials.gov trial registry number. Features were extracted from MEDLINE and PubMed metadata; pairwise similarity scores were modeled using logistic regression.
Results: Article pairs from the same trial were identified with high accuracy (F1 score=0.843). We also created a clustering tool, Aggregator, that takes as input a PubMed user query for RCTs on a given topic, and returns article clusters predicted to arise from the same clinical trial.
Discussion: Although painstaking examination of the full text may be needed to be conclusive, metadata are surprisingly accurate in predicting when two articles derive from the same underlying clinical trial.
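The F1 score reported above summarizes how well Aggregator's pairwise linking balances precision (how many predicted same-trial pairs are correct) against recall (how many true same-trial pairs are found). A minimal illustration, with hypothetical counts unrelated to the study's data:

```python
# F1 score as a summary of pairwise trial-linking performance;
# the example counts below are hypothetical, not from the study.
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall over predicted article pairs."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correctly linked pairs, 15 spurious links, 15 missed links
print(round(f1_score(tp=80, fp=15, fn=15), 3))  # 0.842
```

Because the harmonic mean punishes imbalance, a high F1 on this task means the model is neither over-merging unrelated articles nor splitting genuine multi-publication trials.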