Abstract

Conventional practices in species distribution modeling lack predictive power when the spatial structure of the data is not taken into account. Choosing a modeling approach that counters overfitting during model training, however, can improve predictive performance on spatially separated test data and thus yield more reliable models. This study introduces spatialMaxent (https://github.com/envima/spatialMaxent), software that combines state-of-the-art spatial modeling techniques with the popular species distribution modeling software Maxent. It provides forward variable selection, forward feature selection, and regularization-multiplier tuning, all based on spatial cross-validation, which addresses overfitting during model training by accounting for spatial dependency in the training data. We assessed the performance of spatialMaxent using the National Center for Ecological Analysis and Synthesis dataset, which contains over 200 anonymized species across six regions worldwide. Our results show that in 80 percent of cases, spatialMaxent outperforms both conventional Maxent and models optimized according to literature recommendations without a spatial tuning strategy. spatialMaxent is user-friendly and easily accessible to researchers, government authorities, and conservation practitioners; it therefore has the potential to play an important role in addressing pressing challenges in biodiversity conservation.
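To make the tuning idea concrete, here is a minimal sketch of regularization tuning via spatial block cross-validation. It is an illustration, not spatialMaxent's implementation: a scikit-learn logistic regression stands in for Maxent, its inverse regularization strength C stands in for Maxent's regularization multiplier, latitudinal bands stand in for proper spatial folds, and all data are synthetic.

```python
# Hypothetical illustration: tune a regularization setting with spatial
# (block) cross-validation rather than random folds, so that test data
# are geographically separated from training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic occurrence data with a spatial gradient.
n = 1000
lat = rng.uniform(0, 10, n)                      # y coordinate
env = np.column_stack([
    lat + rng.normal(0, 1, n),                   # spatially structured predictor
    rng.normal(0, 1, n),                         # uninformative predictor
])
p_presence = 1 / (1 + np.exp(-(env[:, 0] - 5)))  # true response curve
y = (rng.uniform(size=n) < p_presence).astype(int)

# Spatial folds: contiguous latitudinal bands of roughly equal size.
n_folds = 5
edges = np.quantile(lat, np.linspace(0, 1, n_folds + 1)[1:-1])
fold = np.digitize(lat, edges)

def spatial_cv_auc(C):
    """Mean AUC over spatially blocked folds for one C value."""
    aucs = []
    for k in range(n_folds):
        train, test = fold != k, fold == k
        model = LogisticRegression(C=C, max_iter=1000).fit(env[train], y[train])
        aucs.append(roc_auc_score(y[test], model.predict_proba(env[test])[:, 1]))
    return float(np.mean(aucs))

# Grid search over candidate regularization settings (analogous to
# trying several Maxent regularization multipliers).
for C in (0.01, 0.1, 1.0, 10.0):
    print(f"C={C:<5} spatial-CV AUC={spatial_cv_auc(C):.3f}")
```

spatialMaxent applies the same spatially blocked evaluation to its forward variable selection and forward feature selection, so that every tuning decision is scored on geographically held-out data.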
