- Findings of Bayesian Mixed Treatment Comparison Meta-Analyses: Comparison and Exploration Using Real-World Trial Data and Simulation
- Research Report Jan. 31, 2013
Abstract – Final – Dec. 21, 2011
An Exploration of Bayesian Mixed Treatment Comparisons Methods
It is generally of interest in a comparative effectiveness review to compare a number of available interventions. Many randomized controlled trials (RCTs), however, compare an active drug only to placebo or to the standard treatment, and direct head-to-head evidence on competing interventions is rare. The RCT is considered the “gold standard” for comparing drug efficacy, and naively comparing treatments across trials is unwise because the benefits of randomization no longer hold. Methods such as adjusted indirect comparisons using a frequentist approach and, more recently, Bayesian mixed treatment comparisons (MTC) meta-analysis have been proposed for situations where direct head-to-head evidence is unavailable or insufficient.
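To make the idea of an adjusted indirect comparison concrete, the following sketch works through the standard frequentist (Bucher-style) calculation with hypothetical trial summaries: when A and B have each been compared with a common comparator C but never with each other, the indirect A-versus-B effect is the difference of the two direct effects, and its variance is the sum of the component variances. All numbers below are illustrative assumptions, not data from any actual review.

```python
import math

# Hypothetical trial summaries (log odds ratios and standard errors):
# one A-vs-C trial and one B-vs-C trial; no direct A-vs-B evidence.
d_AC, se_AC = -0.50, 0.20   # A vs. common comparator C
d_BC, se_BC = -0.20, 0.25   # B vs. common comparator C

# Bucher adjusted indirect comparison: the indirect A-vs-B effect is the
# difference of the two direct effects, and its variance is the sum of
# the two variances (randomization is preserved within each trial).
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC**2 + se_BC**2)

# 95% confidence interval for the indirect estimate
lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"indirect log OR = {d_AB:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Note how the summed variance makes the indirect estimate inherently less precise than either direct comparison, which is the motivation for aim 4 below.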
The overarching objective is to better understand the performance of Bayesian MTC meta-analysis methods. The specific aims are: (1) to compare how Bayesian MTC methods perform for different types of evidence network patterns, (2) to compare Bayesian MTC methods to frequentist indirect methods for various types of outcome measures, (3) to explore how meta-regression can be used with Bayesian MTC meta-analysis to investigate heterogeneity, and (4) for each of the evidence network scenarios, to determine how many equally sized studies are needed for an indirect comparison to equal the validity of one (same-sized) direct-comparison study.
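A back-of-envelope calculation illustrates the arithmetic behind aim (4) under the simplest possible assumptions (fixed effects, and every trial the same size, so every trial's estimate has the same variance v): a direct A-vs-B trial yields variance v, while one indirect comparison requires two trials and yields variance 2v, so two independent indirect comparisons (four trials) are needed to match one direct trial. This is only a heuristic, not the project's actual analysis.

```python
# Assumption: every trial is the same size and yields an effect estimate
# with the same variance v; pooling is fixed-effect (inverse-variance).
v = 1.0  # variance of a single trial's effect estimate (arbitrary units)

# One direct A-vs-B trial:
var_direct = v

# One indirect comparison needs two trials (A-vs-C and B-vs-C), and its
# variance is the sum of the two components:
var_one_indirect = v + v  # = 2v

def var_k_indirect(k):
    """Variance after pooling k independent indirect comparisons."""
    return 2 * v / k

# Two indirect comparisons (i.e., four trials) match the precision of a
# single direct trial under these assumptions:
assert var_k_indirect(2) == var_direct
```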
We will implement Bayesian MTC methods under a variety of evidence network patterns, using data from current or recent systematic reviews as well as simulated data. The data will include binary and continuous outcomes. We will explore the robustness of the MTC modeling framework in handling different evidence network patterns and examine how the structure of the data affects model fit. We plan to compare the results from each of these scenarios to those of frequentist indirect methods under both fixed-effects and random-effects assumptions. The simulation study will explore how both the network pattern and the number of available studies affect the model's ability to produce accurate and precise effect estimates.
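A minimal sketch of the kind of simulation described above, for a star-shaped network with a binary outcome: two treatments A and B are each compared with a common control C in separate simulated two-arm trials, and the indirect A-vs-B log odds ratio is recovered through the common comparator. All parameters (arm size, event probabilities) are illustrative assumptions, not the project's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Star network: A-vs-C and B-vs-C trials only, binary outcome.
n = 2000                          # patients per arm (assumed)
p_C, p_A, p_B = 0.30, 0.20, 0.25  # assumed true event probabilities

def simulated_log_or(p_trt, p_ctl):
    """Simulate one two-arm trial; return its log odds ratio and variance."""
    e_t, e_c = rng.binomial(n, p_trt), rng.binomial(n, p_ctl)
    log_or = np.log(e_t * (n - e_c) / (e_c * (n - e_t)))
    var = 1 / e_t + 1 / (n - e_t) + 1 / e_c + 1 / (n - e_c)
    return log_or, var

d_AC, v_AC = simulated_log_or(p_A, p_C)
d_BC, v_BC = simulated_log_or(p_B, p_C)

# Indirect A-vs-B estimate via the common comparator C:
d_AB, v_AB = d_AC - d_BC, v_AC + v_BC

# True A-vs-B log odds ratio implied by the assumed probabilities:
true_AB = np.log(p_A / (1 - p_A)) - np.log(p_B / (1 - p_B))
print(f"indirect estimate {d_AB:.3f} vs. truth {true_AB:.3f}")
```

Repeating such draws many times, and varying the network pattern and the number of trials, is how a simulation study can quantify the accuracy and precision of indirect estimates.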