To remain useful, comparative effectiveness reviews (CERs) and other systematic reviews require periodic updating. Although several studies have assessed when and how to update, no research has examined the optimal formats for presenting update results to users. The aim of the present study was to gather input from various users of CERs on the usability of a range of formatting methods for showing the changes from the original report to the update.
Using the executive summaries of a comparative effectiveness review our Evidence-based Practice Center conducted in 2001 and the update review we conducted in 2008, we initially created five versions of the update summary, each using a different format to show changes from the original to the update report (e.g., new and retired Key Questions, changes in search strategies and inclusion/exclusion criteria) and changes in the findings. To test the five differently formatted summaries, we identified several categories of CER users, convened an informal virtual focus group drawn from those categories, and asked its members to evaluate the summaries on several dimensions, first via an email questionnaire and then in a group conference call at which we presented the questionnaire results. Based on group feedback, we created two additional versions and tested them with a second focus group and with a third small group. The rationale for the selection of formats was twofold: to imitate, and thus evaluate, the formats used by several organizations whose role is to conduct systematic reviews and updates, and to create and test novel formats in response to users' suggestions.
Policymakers who rely on CERs and other systematic reviews as the basis for policy (including health insurance companies, health care organizations, research funders, and guideline makers) expressed the need to see changes in the review process as well as in outcomes clearly marked (with changes in outcomes and conclusions preferably shown in graphic form), while still having access to the entire set of data and analyses on which the conclusions were based. The small group of clinicians preferred to see the skeleton of the report (Key Questions, conceptual framework, inclusion/exclusion criteria) as well as the outcomes and conclusions presented entirely in graphic form for ease of reading.
Different users of CERs clearly have different information needs. Although policymakers need access to the entire data set and analyses that comprise a systematic review (both the original and the update), all users benefit from summaries that show what changed in as succinct a format as possible, preferably in graphic form.