Issues in Meta-Analysis
Simcha Pollack, Department of Computer Information Systems and Decision Sciences, The Peter J. Tobin College of Business; Michael Borenstein, Director, Biostat, Teaneck, NJ
Meta-analyses are conducted for a variety of reasons: not only to synthesize evidence on the effects of interventions, but also to support evidence-based policy or practice. The purpose of a meta-analysis, or more generally of any research synthesis, has implications for when it should be performed (i.e., after how many studies have been completed), what model should be used to analyze the data, what sensitivity analyses should be undertaken, and how the results should be interpreted. Losing sight of the fact that meta-analysis is a tool with multiple applications causes confusion and leads to pointless debates about the "right" way to perform a research synthesis, when there is no single right way: it all depends on the purpose of the synthesis and the data that are available. Much of this presentation will expand on this idea.
We will focus on an unsolved problem in meta-analysis, using models and software that allow us to optimize the tradeoff between executing many relatively small studies and a few larger ones. How does the shortfall in quality that is more likely to occur in smaller studies affect precision and power? Formally, the within-study variance V_d is a function of S_within, which would increase in a relatively sloppy smaller study. The between-studies variance τ², the variance of the distribution of effect sizes in a random-effects model, is a fixed constant for a set of studies with some degree of commonality; but introducing small, less standardized studies should increase this parameter. These considerations can be quantified and explicitly included in the model (as can the extra cost of supervising many smaller studies).
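The tradeoff described above can be sketched numerically. The following is a minimal Python illustration, not the authors' software: it uses the standard random-effects summary variance V_M = 1 / Σ 1/(V_i + τ²) for k equally sized two-arm studies of a standardized mean difference, and compares power for "many small" versus "few large" designs at the same total sample size. The specific τ² values (inflated for the small, less standardized studies) are purely hypothetical assumptions for illustration.

```python
import math

def var_d(n1, n2, d):
    """Within-study variance of a standardized mean difference d
    (common large-sample approximation)."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def summary_variance(k, n_per_arm, d, tau2):
    """Random-effects variance of the summary effect for k equally
    sized two-arm studies: V_M = 1 / sum_i 1/(V_i + tau^2)."""
    vi = var_d(n_per_arm, n_per_arm, d)
    return 1.0 / (k / (vi + tau2))

def power(k, n_per_arm, d, tau2, alpha_z=1.959963985):
    """Approximate two-sided power of the z-test of the summary
    effect, using the normal CDF via math.erf."""
    z = d / math.sqrt(summary_variance(k, n_per_arm, d, tau2)) - alpha_z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Same total N per arm (400): 20 small studies vs 4 large studies.
# Hypothetical assumption: sloppier small studies inflate tau^2.
many_small = power(k=20, n_per_arm=20,  d=0.3, tau2=0.08)
few_large  = power(k=4,  n_per_arm=100, d=0.3, tau2=0.01)
```

Under these assumed parameters the few-large design retains higher power despite the identical total sample size, because the inflated τ² of the small studies dominates their summed precision; varying τ² and the per-study cost makes the tradeoff explicit.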