Assessing the Robustness of Meta-Analysis for the Fixed-Effect Model
Abstract
Current meta-analysis methods for the fixed-effect model with continuous outcome variables were developed under the assumption that the variation of the outcome variable between patients within the treatment groups of each study follows a normal distribution. However, real-world data do not always follow a normal distribution, which can make meta-analysis results unreliable.
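For context, a minimal sketch of the standard inverse-variance fixed-effect estimator, which is presumably what "current meta-analysis methods" refers to here (the abstract does not specify the exact estimator), pools the study-level treatment effects as

\hat{\theta}_{FE} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{\widehat{SE}(\hat{\theta}_i)^2},
\qquad SE(\hat{\theta}_{FE}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}} .

The coverage probability and type I error rate discussed below refer to the confidence interval and test built from \hat{\theta}_{FE} and SE(\hat{\theta}_{FE}), whose nominal properties rest on the normality assumption above.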
This study uses Monte Carlo simulation to evaluate robustness by comparing the analysis results with the true values when the normality assumption is violated; performance measures include the relative bias of the estimated treatment effect, the coverage probability of the estimates, and the power and type I error rate of the test of the null hypothesis. We simulate various non-normal outcome data, including mixture-of-normals, lognormal, gamma, and χ² distributions, and examine how the sample size per study, the number of studies, the magnitude of skewness, and the effect size influence the results.
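The abstract does not include the simulation code, but the following hypothetical Python sketch illustrates one cell of such a design: skewed (lognormal) outcomes are generated for each study, pooled with inverse-variance weights, and relative bias and 95% coverage are computed over replications. The parameter values, the lognormal choice, and the function names are illustrative assumptions, not the thesis's actual settings.

# Hypothetical sketch of one simulation cell; not the thesis's code.
import numpy as np

rng = np.random.default_rng(0)

def simulate_meta(k=5, n=20, true_effect=0.5, sigma=1.0, reps=2000):
    """Monte Carlo check of the inverse-variance fixed-effect estimate
    when within-group data are lognormal rather than normal."""
    est = np.zeros(reps)
    covered = np.zeros(reps, dtype=bool)
    for r in range(reps):
        theta_hat = np.zeros(k)
        se_hat = np.zeros(k)
        for i in range(k):
            # Skewed outcome data: identical lognormal noise in both arms,
            # shifted so the true mean difference equals `true_effect`.
            control = rng.lognormal(mean=0.0, sigma=sigma, size=n)
            treat = rng.lognormal(mean=0.0, sigma=sigma, size=n) + true_effect
            theta_hat[i] = treat.mean() - control.mean()
            se_hat[i] = np.sqrt(treat.var(ddof=1) / n + control.var(ddof=1) / n)
        # Inverse-variance fixed-effect pooling.
        w = 1.0 / se_hat**2
        pooled = np.sum(w * theta_hat) / np.sum(w)
        pooled_se = 1.0 / np.sqrt(np.sum(w))
        est[r] = pooled
        covered[r] = (pooled - 1.96 * pooled_se
                      <= true_effect
                      <= pooled + 1.96 * pooled_se)
    rel_bias = (est.mean() - true_effect) / true_effect
    return rel_bias, covered.mean()

rel_bias, coverage = simulate_meta()
print(f"relative bias: {rel_bias:.3f}, 95% CI coverage: {coverage:.3f}")

Varying k, n, sigma, and true_effect in a sketch like this corresponds to the study's factors of number of studies, sample size per study, skewness, and effect size.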
The results show that small studies with highly skewed data yield non-robust meta-analysis results under the fixed-effect model. Moreover, increasing the number of studies without sufficient sample sizes worsens the relative bias, coverage probability, and power. This simulation therefore suggests that investigators must be cautious when applying the fixed-effect model to small studies, particularly when the data may be non-normal.
This study recommends that investigators include large trials whenever possible. If large trials are not feasible, they should always assess the normality of the datasets and select an appropriate meta-analysis method to obtain robust results. This helps ensure that policies and guidelines are based on reliable evidence, minimizing the risk of implementing ineffective or harmful ones.
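As one illustration of the recommended normality assessment (the abstract does not name a specific test), a Shapiro-Wilk screen could be applied to each study arm before choosing a meta-analysis model; the helper function and significance threshold below are hypothetical.

# Illustrative only: a simple normality screen for study-level data.
import numpy as np
from scipy import stats

def flag_non_normal(samples, alpha=0.05):
    """Return True if the Shapiro-Wilk test rejects normality at level alpha."""
    statistic, p_value = stats.shapiro(samples)
    return p_value < alpha

rng = np.random.default_rng(1)
skewed_arm = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # assumed example data
print("non-normal?", flag_non_normal(skewed_arm))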