Difficulties in benchmarking ecological null models: an assessment of current methods

Chai Molina, Lewi Stone

Research output: Contribution to journal › Article › peer-review


Identifying species interactions and detecting when ecological communities are structured by them is an important problem in ecology and biogeography. Ecologists have developed specialized statistical hypothesis tests to detect patterns indicative of community-wide processes in their field data, and null model approaches have proved particularly popular in this respect. The freedom allowed in choosing the null model and the summary statistic leads to a proliferation of possible hypothesis tests from which ecologists can choose. Here, we point out serious shortcomings of a popular approach to choosing the best hypothesis test for the ecological problem at hand: benchmarking different hypothesis tests by assessing their performance on artificially constructed data sets. We discuss terminological errors concerning Type I and Type II errors that underlie these approaches. We argue that the key benchmarking methods proposed in the literature are not a sound guide for selecting null hypothesis tests, and further, that there is no simple way to benchmark null hypothesis tests. Surprisingly, the basic problems identified here do not appear to have been addressed previously, and these methods are still being used to develop and test new null models and summary statistics, from quantifying community structure (e.g., nestedness and modularity) to analyzing ecological networks.
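For readers unfamiliar with the procedure being critiqued, the following is a minimal sketch (not taken from the paper) of a null-model hypothesis test on a species-by-site presence-absence matrix. It uses the C-score of Stone and Roberts (1990) as the summary statistic and, for simplicity, an equiprobable null model that shuffles all matrix entries while preserving total fill; published analyses typically use more constrained null models (e.g., fixed row and column sums).

```python
import numpy as np

def c_score(m):
    """Mean number of checkerboard units over all species (row) pairs."""
    r = m.sum(axis=1)              # per-species occurrence totals
    shared = m @ m.T               # sites shared by each species pair
    n = m.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += (r[i] - shared[i, j]) * (r[j] - shared[i, j])
            pairs += 1
    return total / pairs

def null_model_p_value(m, n_null=999, seed=None):
    """One-sided p-value: fraction of null matrices with C-score >= observed."""
    rng = np.random.default_rng(seed)
    obs = c_score(m)
    flat = m.flatten()             # copy; the original matrix is untouched
    hits = 0
    for _ in range(n_null):
        rng.shuffle(flat)          # equiprobable null: permute all entries
        if c_score(flat.reshape(m.shape)) >= obs:
            hits += 1
    return (hits + 1) / (n_null + 1)   # count the observed matrix itself

# A strongly "checkerboarded" community (rows segregate across sites)
obs_matrix = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [1, 0, 1, 0]])
p = null_model_p_value(obs_matrix, n_null=199, seed=0)
```

The benchmarking practice the paper criticizes would then score such a test by applying it to artificial "structured" and "unstructured" matrices and tallying apparent error rates.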

Original language: English
Article number: e02945
Issue number: 3
State: Published - 1 Mar 2020


  • Type I error
  • Type II error
  • benchmarking
  • community structure
  • null models
  • power
  • robustness

