This material is published in Forest Ecology and Management (2017, 135: 246-258, doi: 10.1016/j.ecolecon.2017.01.019).


Use of meta-analysis in forest biodiversity research: key challenges and considerations

Rebecca Spake and C. Patrick Doncaster

Meta-analysis functions to increase the precision of empirical estimates and to broaden the scope of inference, making it a powerful tool for informing forest management and conservation actions around the world. Despite substantial advances in adapting meta-analytical techniques for use in the ecological sciences from their foundations in the medical and social sciences, forest biodiversity research still presents particular challenges to its application. These relate to the long timescales of successional stages, which often preclude experimental designs, and the often-large spatial scales over which plots must be randomly allocated to the treatment levels of interest. Empirical studies measuring biodiversity responses to forest treatments vary widely in quality with respect to the number of treatment replicates and the randomness of their allocation to treatment levels, with a high prevalence of pseudoreplicated designs. It has been suggested that meta-analysis can offer a solution to the vast pseudoreplicated literature, because results from pseudoreplicated studies are informative when considered collectively. Here we review the principal issues that arise when including differently designed studies in meta-analyses of forest biodiversity responses to forest management or disturbance, in addition to more general matters of appropriate question formulation and interpretation of synthetic findings. These concern the need for questions of practical value to forest management, appropriate effect-size estimation, and weighting of primary studies that differ in design and quality. We recommend against using effect sizes that are standardized against within-study variance when pooling studies across different designs or across factors such as taxonomic group. We find a need for alternatives to conventional inverse-variance weighting, to account for variation between studies in their design quality as well as their observed precision. Finally, we recommend caution in interpreting results, particularly with regard to the possibility of systematic biases between reference and treatment stands.
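To make the methodological terms in the abstract concrete, the following minimal Python sketch (with invented example numbers, not data or code from the paper) illustrates two of the concepts discussed: an effect size standardized against within-study variance (Hedges' g, whose value depends on the pooled within-study standard deviation) and the conventional inverse-variance weighting used to pool effect sizes across studies. The study values and function names are purely illustrative.

    import math

    def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Standardized mean difference with small-sample correction (Hedges' g)."""
        sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                              / (n_t + n_c - 2))
        d = (mean_t - mean_c) / sd_pooled          # Cohen's d
        j = 1 - 3 / (4 * (n_t + n_c) - 9)          # small-sample correction factor
        g = j * d
        # Approximate sampling variance of g
        var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
        return g, var_g

    def pool_fixed_effect(effects, variances):
        """Fixed-effect pooled estimate: each study weighted by 1 / variance."""
        weights = [1 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))
        return pooled, pooled_se

    # Hypothetical studies comparing species richness in treated vs. reference stands:
    # (mean_treat, mean_ref, sd_treat, sd_ref, n_treat, n_ref)
    studies = [
        (12.0, 10.0, 3.0, 2.5, 8, 8),
        (15.0, 14.0, 4.0, 4.5, 20, 18),
        (9.0, 11.0, 2.0, 2.2, 5, 5),
    ]
    effects, variances = zip(*(hedges_g(*s) for s in studies))
    pooled, se = pool_fixed_effect(effects, variances)
    print(f"Pooled effect = {pooled:.3f} (SE {se:.3f})")

Because both the standardization (division by the pooled within-study standard deviation) and the weights (1 / variance) depend on within-study variance, studies with small or pseudoreplicated samples can receive misleading influence when designs differ, which is the motivation for the cautions expressed in the abstract.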
