Informing the Borrowing Process for Dose‐Finding Trials by Estimating the Similarity Between Population‐Specific Dose‐Toxicity Curves

Abstract

The conduct of dose‐finding trials can be especially challenging in small populations, for example, in pediatric settings. Recent research has shown that Bayesian borrowing from adult trials, combined with appropriately robust prior distributions, enables the conduct of pediatric dose‐finding trials with very small sample sizes. However, the appropriate degree of borrowing remains a subjective choice, relying on default methods or expert opinion. This paper proposes an approach to empirically determine the degree of borrowing based on a meta‐analysis of the similarity between population‐specific dose‐toxicity curves of other, biologically similar compounds. Although we focus on the pediatric use case, the approach may be applicable to any dose‐finding trial that borrows information from another population. Two popular statistical modeling approaches are applied: the ExNex model and a hierarchical model. The estimated degree of similarity is then translated into the statistical model for the dose‐finding algorithm using either variance inflation or robust mixture prior distributions. The performance of each combination of modeling approaches is investigated in a simulation study. The results with mixture priors are promising, especially with many (20) compounds, while the variance inflation models require additional fine‐tuning and appear less robust. With fewer (3 or 7) compounds, our proposed methods are either in line with, or slightly better than, robust priors that ignore the data from other compounds. We further provide a case study analyzing real dose‐finding data from 6 compounds with our models, demonstrating applicability in real‐world situations. For clinical trial teams, the decision for or against the proposed approach may hinge on the time and cost required to obtain the external data.
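
To make the mixture-prior idea concrete, below is a minimal Python sketch of a two-component robust mixture prior in which the weight on the informative (adult-derived) component is set from an estimated similarity. The normal parameterization, the hyperparameter values, and the use of the similarity directly as a mixture weight are illustrative assumptions, not the paper's ExNex or hierarchical specifications.

```python
import numpy as np
from scipy.stats import norm

def robust_mixture_prior_pdf(theta, w, m_adult, s_adult, m_vague=0.0, s_vague=10.0):
    """Density of a two-component robust mixture prior:
    w * N(m_adult, s_adult^2) + (1 - w) * N(m_vague, s_vague^2).
    The informative component summarizes the adult dose-toxicity data;
    the vague component guards against prior-data conflict."""
    return (w * norm.pdf(theta, m_adult, s_adult)
            + (1 - w) * norm.pdf(theta, m_vague, s_vague))

# Higher estimated similarity -> more prior mass near the adult estimate.
theta = np.linspace(-5.0, 5.0, 1001)
step = theta[1] - theta[0]
mask = np.abs(theta + 1.0) < 1.0            # window around the adult mean of -1
for similarity in (0.2, 0.5, 0.9):
    pdf = robust_mixture_prior_pdf(theta, w=similarity, m_adult=-1.0, s_adult=0.5)
    print(f"similarity={similarity:.1f}: prior mass near adult mean = "
          f"{pdf[mask].sum() * step:.2f}")
```

With low similarity the prior behaves almost like the vague component, so the pediatric data dominate; with high similarity the adult information carries more weight.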

Similar Papers
  • Conference Article
  • 10.71427/icaeed2025/1
Flood Frequency Analysis: Exploring the Role of Statistical Models in Engineering Education and Practice
  • Nov 23, 2024
  • Laura Rima + 1 more

Flooding presents a significant risk to communities globally, as flood damage is increasing due to climate change. To effectively reduce flood damage, scientists use the term 'design flood', which refers to a flood discharge linked to an annual exceedance probability. This study investigates the role of several statistical models adopted in design flood estimation using flood and catchment data from 88 catchments in eastern Australia. The first objective of this study is to compare the effectiveness of two statistical modeling approaches, generalised additive models (GAM) and log-log regression, in estimating flood quantiles. The Generalised Extreme Value (GEV) distribution with L-moments was employed, where the mean, the coefficient of variation and the coefficient of skewness were adopted as the response variables and the catchment characteristics as predictor variables. This study reveals that GAM performs better than the log-log regression technique in capturing the variability of flood quantile estimates. The second objective of this study is to illustrate the learning aspects of the adopted statistical models. It was found that most students do not understand the fundamentals of these statistical modelling techniques and often reach inappropriate conclusions. The findings of this study will assist students and junior researchers in understanding the assumptions related to statistical flood modelling approaches and how these affect decision making in sustainable floodplain management.
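
As a hedged illustration of the two pieces named here, the sketch below computes a 1% annual exceedance probability (100-year) quantile from GEV parameters and then fits a log-log regression of flood quantiles on catchment characteristics. The data, coefficients, and predictor names (area, design rainfall) are synthetic, so this shows the mechanics rather than the study's fitted models.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Hypothetical at-site step: 1% AEP (100-year) quantile from fitted GEV parameters.
c, loc, scale = -0.1, 120.0, 40.0           # shape, location, scale (illustrative)
q100 = genextreme.ppf(0.99, c, loc=loc, scale=scale)

# Regional log-log regression: log(Q100) on log(area) and log(design rainfall).
n = 88                                      # the study used 88 catchments
area = rng.lognormal(5.0, 1.0, n)
rain = rng.lognormal(4.0, 0.3, n)
log_q = 0.5 + 0.7 * np.log(area) + 1.2 * np.log(rain) + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), np.log(area), np.log(rain)])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print("GEV 1% AEP quantile:", round(q100, 1))
print("log-log coefficients:", beta.round(2))
```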

  • Abstract
  • 10.1182/blood-2018-99-114189
A Single Institution Comparison of Treatment Outcomes of Adolescent and Young Adult (AYA) Patients with Acute Myeloid Leukemia (AML) Treated in Pediatric and Adult Settings
  • Nov 29, 2018
  • Blood
  • Vaibhav Agrawal + 5 more

  • Conference Article
  • 10.1109/isspa.1999.818092
New directions in automatic speech recognition: a communication perspective
  • Aug 22, 1999
  • B.S Atal

Summary form only given, as follows. The possibility of automatic speech recognition on computers has fuelled dreams for many years. Indeed, automatic speech recognition holds great promise if the spoken words can be recognised correctly by machines in fluent speech. We review the current state-of-the-art in automatic speech recognition and point to the new directions that research in this field must explore to continue this progress. Statistical modelling of speech using utterances spoken by many speakers in a variety of environments has been important in achieving the progress that has been realised so far. But on the negative side, the statistical modelling approach has led to solutions that are narrow and do not generalise. The statistical approach, although fruitful in the early development of the technology, is now a hindrance as we become much more ambitious in seeking solutions to bigger problems. We take a fresh look at the problem of automatic speech recognition, not as a problem in statistical modelling, but as a problem in voice communication where the goal is to recognise every spoken word correctly and discuss the fundamental underpinnings of this new approach.

  • Conference Article
  • 10.1063/1.5111277
The relationship between Rastrelliger kanagurta and its environmental parameters in exclusive economic zone (EEZ) of Malaysia using geographic information system (GIS) and statistical modeling approaches
  • Jan 1, 2019
  • Nur Azwin Razib + 1 more

Recent statistical modeling approaches in fisheries research have focused on the relationship between fish catch per unit effort (CPUE) data and remotely sensed oceanographic data. This study used multiple statistical modeling approaches to reliably determine the most important parameter influencing the CPUE distribution of Rastrelliger kanagurta. Oceanographic data, including water depth (WD), chlorophyll-a (CHL), sea surface temperature (SST), sea surface height (SSH) and surface wind (SW), were the environmental parameters extracted from 570 locations within the EEZ of Malaysia using GIS techniques. The combined CPUE and oceanographic data were fitted with a Generalized Additive Model (GAM), a Generalized Linear Model (GLM), a Boosted Regression Tree (BRT) and Multivariate Adaptive Regression Splines (MARS). Each statistical modeling approach described the relationship between fish and their environment differently. Results showed non-linear behavior in fish CPUE, with the strongest overall relationship with SSH, which consistently ranked highest. This indicated that SSH was the most important parameter affecting CPUE, based on the highest percentage of relative influence and variable importance and a highly significant relationship (p<0.001) compared with the other parameters. This study showed the capability of GIS and statistical modeling to help understand the distribution of R. kanagurta and its environmental parameters in the South China Sea.
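
As an illustration of one of the four approaches named above, the following sketch fits a Gamma GLM with a log link to synthetic CPUE data. The predictor effects and data are invented (only two of the five parameters are used), so it shows the mechanics rather than the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 570                                   # the study sampled 570 locations

# Hypothetical standardized predictors (two of the study's five: SSH and SST).
ssh, sst = rng.normal(size=n), rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * ssh + 0.2 * sst)  # SSH given the stronger effect, echoing the abstract
cpue = rng.gamma(shape=2.0, scale=mu / 2.0)

# Gamma GLM with a log link: the 'GLM' entry among the four compared approaches.
X = sm.add_constant(np.column_stack([ssh, sst]))
res = sm.GLM(cpue, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
print(res.params.round(2))                # expect a larger coefficient on SSH
```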

  • Research Article
  • Citations: 10
  • 10.1063/5.0056662
Using statistical modeling to predict and understand fusion experiments
  • Dec 1, 2021
  • Physics of Plasmas
  • V Gopalaswamy + 18 more

Over 300 cryogenic layered direct-drive inertial confinement fusion implosions have been successfully executed on the OMEGA Laser System in the last decade. However, extracting sufficient understanding from these experiments to develop new designs or to identify or mitigate degradation sources remains challenging. Recently, a statistical modeling approach was developed to successfully design and predict improved implosion experiments on OMEGA. Here, we show that one-dimensional simulations can be used to predict the outcomes of systematically perturbed three-dimensional simulations and that this statistical modeling approach can be used to identify or rule out physical mechanisms for some of the degradation sources observed on the OMEGA Laser System for direct-drive cryogenic inertial confinement fusion. In this instance, we investigate the fusion yield dependencies on the ion temperature asymmetries and laser beam size observed in experiments by comparing with trends in three-dimensional synthetic simulation databases. Using the statistical model on these systematically perturbed simulations, we find that the statistically inferred dependency on the measured ion temperature asymmetries is well explained by imposed ℓ=1 modes. However, we find that the dependency on the laser beam size is only dominated by the illumination non-uniformity for some extreme cases.
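
One well-known form of such statistical models in this line of work is a power-law regression mapping simulated and measured quantities to the experimental yield. The sketch below illustrates that idea on synthetic data with made-up exponents and proxy variables; it is an assumption-laden stand-in, not the OMEGA model itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300                                   # ~300 cryogenic implosions, per the abstract

# Hypothetical inputs: 1-D simulated yield plus asymmetry/beam-size proxies.
y_sim = rng.lognormal(0.0, 0.5, n)
dT_asym = rng.lognormal(0.0, 0.2, n)      # ion-temperature asymmetry proxy
beam = rng.lognormal(0.0, 0.1, n)         # laser beam size proxy
y_exp = y_sim**0.9 * dT_asym**-0.6 * beam**1.5 * rng.lognormal(0, 0.1, n)

# Power-law statistical model: linear regression in log space recovers exponents.
X = np.column_stack([np.ones(n), np.log(y_sim), np.log(dT_asym), np.log(beam)])
coef, *_ = np.linalg.lstsq(X, np.log(y_exp), rcond=None)
print("inferred exponents:", coef[1:].round(2))   # ~[0.9, -0.6, 1.5]
```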

  • Research Article
  • Citations: 8
  • 10.1007/s40808-020-00932-5
Investigating historical climatic impacts on wheat yield in India using a statistical modeling approach
  • Aug 11, 2020
  • Modeling Earth Systems and Environment
  • Anand Madhukar + 2 more

Wheat is an important food security crop supporting the livelihood of a large population across the world. Though climate change has been affecting wheat yields globally, detailed studies of the historical climatic impacts on wheat yield in India are mostly missing. There are two approaches to assessing the climatic impacts on crop yields: process-based models and statistical models. The present manuscript investigates the historical climatic impacts on wheat yield in India using a statistical modeling approach. Fifty years of wheat yield and climate data (panel data comprising 175 Indian districts) were fitted to six statistical models over the periods 1966–2015, 1966–75, 1976–85, 1986–95, 1996–05, and 2006–15. We found that (1) minimum and maximum temperatures impacted wheat yield negatively in almost all the periods (except during 1966–75 and 1996–05 for maximum temperature). The estimated regression coefficients for the effect of minimum and maximum temperatures on wheat yield were −43.74 kg/ha per °C (p < 0.001) and −101.80 kg/ha per °C (p < 0.001) during 1966–2015. (2) Precipitation and wet days did not impact wheat yield significantly during 1966–2015, but affected wheat yields negatively during 1996–05, and positively during 1976–85 and 1986–95. (3) Potential evapotranspiration and vapor pressure impacted wheat yield negatively in almost all the periods (except during 1966–75 and 1996–05 for potential evapotranspiration). The estimated regression coefficients for the effect of potential evapotranspiration and vapor pressure on wheat yield were −75.12 kg/ha per cm/day (p < 0.001) and −49.98 kg/ha per hPa (p < 0.001) during 1966–2015. Our findings highlight that temperatures, potential evapotranspiration, and vapor pressure have a more profound negative impact on wheat yield than precipitation and wet days. This detailed analysis of historical climatic impacts on wheat yield is the first step towards the bigger goal of identifying and recommending appropriate mitigation strategies. The results of this study are highly relevant for planners and policymakers in India and globally.
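
As a hedged sketch of the panel-regression idea behind such estimates, the code below simulates district-by-year data whose true temperature coefficients are set near the values reported above, then recovers them with a fixed-effects (within) estimator. The paper's six actual model specifications are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_dist, n_yr = 175, 50                    # 175 districts x 50 years, per the abstract

# Hypothetical panel: yield responds negatively to Tmin and Tmax (kg/ha per degC).
tmin = rng.normal(12, 2, (n_dist, n_yr))
tmax = rng.normal(28, 2, (n_dist, n_yr))
dist_fe = rng.normal(2000, 300, (n_dist, 1))
yield_ = dist_fe - 44 * tmin - 102 * tmax + rng.normal(0, 150, (n_dist, n_yr))

# Fixed-effects (within) estimator: demean each district's series, then pool OLS.
def demean(a): return a - a.mean(axis=1, keepdims=True)
X = np.column_stack([demean(tmin).ravel(), demean(tmax).ravel()])
beta, *_ = np.linalg.lstsq(X, demean(yield_).ravel(), rcond=None)
print("Tmin, Tmax coefficients (kg/ha per degC):", beta.round(1))   # ~[-44, -102]
```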

  • Research Article
  • 10.1002/mren.201500027
Mathematical Tools and Approaches for Polymerization Reaction Engineering II— Statistical Modeling Tools and Approaches
  • May 26, 2015
  • Macromolecular Reaction Engineering
  • José Carlos Pinto

This special issue of Macromolecular Reaction Engineering is dedicated to "Statistical Modeling Tools and Approaches for Polymerization Reaction Engineering" and is a continuation of the special issue published in 2014,[1] dedicated to "Mathematical Tools and Approaches for Polymerization Reaction Engineering." After finishing that first volume, we quickly realized that some important aspects of statistical tools frequently employed in the polymerization field had not been discussed in sufficient detail in the interesting papers submitted by our colleagues at that opportunity. As a consequence, even before publishing the first volume of this series, we decided to start the preparation of this second volume. As in the previous case, I was very glad when Dr. Spiegel decided to support the preparation of this special issue of Macromolecular Reaction Engineering and invited me again to help him with the organization of this volume. As a matter of fact, I was captured by, and became an enthusiastic supporter of, statistical methods and approaches in the 1980s, when I was a graduate student at COPPE/UFRJ and found out that statistical experimental design tools[2-4] could allow for a significant reduction of experimental costs and maximization of the information content of experimental data obtained with very hard work at research labs. Since then, I have been flirting with statistical methods for more than 30 years, so I could not avoid the temptation to get involved in this very interesting project. Statistical tools and approaches have been used in the polymerization field for many decades, as brilliant researchers such as W. H. Carothers and J. P. Flory realized that polymerization reaction mechanisms could not be entirely described by deterministic rules, since polymer materials can almost always be regarded as random mixtures of macromolecules that present distinct sizes, compositions, degrees of branching, and so on.[5-7] Although reproducible, the final properties of most polymer materials prepared inside the usually very different reaction vessels and processes result in all cases from a complex network of random fundamental reaction steps that take place simultaneously, like a magnificent orchestra that emerges from apparent chaos. In this scenario, it seems reasonable to assert that one of the most important roles of scientists and engineers who work in this particular field is the understanding and the manipulation (or design) of the individual probabilities that control the occurrence of the respective reaction events in such complex mechanisms. Although statistical events are intrinsically connected to the hearts and souls of polymerization reaction mechanisms, the use of statistical tools and approaches in the polymer field can also be related to other applications. For example, the existence of complex reaction networks usually leads to a large number of model parameters, which must be inferred from available data.
Due to the unavoidable existence of experimental uncertainties, model parameters must then be analyzed carefully with the help of consistent statistical tools.[8] Besides, as statistical analyses usually rely on a large number of independent simulations, implementation of statistical procedures requires proper development and implementation of advanced computational schemes in order to provide useful results in reasonable time.[9] For this reason, the fast development of computer resources has also encouraged the continuous development of statistical applications in the polymer field. Based on the previous paragraphs, the papers published in the current volume consider applications of statistical modeling and tools in polymerization processes for different purposes and pursuing different objectives. For instance, the paper by Cui et al.[10] discusses the estimation of model parameters from experimental data obtained during the production of poly(trimethylene ether glycol) from 1,3-propanediol. Statistical methods are required for the formulation of the parameter estimation problem, determination of parameter significance, and interpretation of model adequacy, as usual in the polymer field. The papers by Scott et al.[11] and Kazemi et al.[12] are devoted to statistical design of experiments for optimal estimation of kinetic parameters in polymerization models. In the first case, D-optimum designs are proposed to allow for maximization of the information content of model parameters and for reduction of the model size. As shown by the authors, the use of statistical designs can lead to more precise model parameters and more compact model formulations. In the second case, the authors use the Fisher information matrix to design experiments for improved estimation of reactivity ratios in copolymerization problems, taking into account the intrinsic variability of measured variables. As shown by the authors, the proposed approach indicates that the experimental ranges that lead to optimum estimation of reactivity ratios are located in the vicinity of the corners of the terpolymerization composition triangular plot. The paper by Tobita[13] makes use of Markov chain approaches and Monte Carlo simulations in order to analyze the effects of chain branching and scission kinetics on the branched structure of low-density polyethylenes. In particular, it is shown that the effect of different kinetics on the polymer structure can be negligible when the final monomer conversion is smaller than 25%, as normally used in commercial production processes, but can be very significant if conversion becomes higher. Monte Carlo methods were also employed by Lemos et al.[14] to analyze the effect of residence time distributions on the molecular weight distributions and chain composition distributions of copolymers produced in tubular reactors through controlled radical polymerization mechanisms. The proposed technique assumes that the analyzed system can be described as a set of batch reactors operated independently, whose volumes and batch times can be related to the discretized version of the residence time distribution. Pladis et al.[15] proposed the use of a modified kinetic Monte Carlo method for analyses of the viscoelastic properties of low-density polyethylenes produced in high-pressure tubular reactors.
The proposed model uses an ensemble of branched polymer chains generated by the Monte Carlo procedure to describe the viscoelastic properties of the obtained product, which are compared to real experimental measurements of grades produced in industrial-scale reactors. As one can observe, Monte Carlo methods are so important for statistical modeling of polymerization processes and polymer properties that the field is reviewed thoroughly by Brandão et al.[16] As discussed by the authors, the use of Monte Carlo methods will probably become even more important in the future, given the fast development of computer resources and the high flexibility provided by the method to describe different processes and materials. In spite of that, Kryven and Iedema[17] show that deterministic procedures based on population balances can also provide competitive model structures, in terms of both computational cost and model responses, when compared to Monte Carlo methods, if an appropriate description of the reaction mechanism and kinetics is provided and suitable numerical schemes are implemented. We sincerely hope the readers of Macromolecular Reaction Engineering will appreciate this second volume of papers concerned with modeling and numerical aspects of mathematical methods used in the polymerization field. We also hope this collection of papers will stimulate the increasing use of statistical methods in this field. Meanwhile, we will keep sending Stefan a steady flow of random perturbations in order to create the appropriate momentum for a third volume of this series, most likely on computational fluid dynamics aspects. José Carlos Pinto obtained his BSc in Chemical Engineering at Universidade Federal da Bahia (Salvador, Bahia, Brazil) in 1985, his MSc in Chemical Engineering at Universidade Federal do Rio de Janeiro (Rio de Janeiro, Rio de Janeiro, Brazil) in 1987, and his DSc in Chemical Engineering at Universidade Federal do Rio de Janeiro in 1991. At present, he holds the position of Professor Titular at Programa de Engenharia Química/COPPE, Universidade Federal do Rio de Janeiro, is Professor Permanente at Programa de Pós-Graduação em Química, Instituto Militar de Engenharia, and is a full member of the Academia Brasileira de Ciências and Academia Nacional de Engenharia. José Carlos has worked in the general field of modeling, simulation, and control of polymerization processes since 1987, has published about 300 papers in refereed journals, and has deposited 30 patents. José Carlos has been the coordinator of more than 100 projects with industrial partners and has advised more than 100 MSc dissertations and 50 DSc theses.

  • Research Article
  • Citations: 70
  • 10.1016/j.fm.2014.04.008
Individual cell heterogeneity as variability source in population dynamics of microbial inactivation
  • Apr 30, 2014
  • Food Microbiology
  • Zafiro Aspridou + 1 more

  • Research Article
  • Citations: 42
  • 10.1016/j.dendro.2010.09.002
Statistical modelling and RCS detrending methods provide similar estimates of long-term trend in radial growth of common beech in north-eastern France
  • Jan 1, 2011
  • Dendrochronologia
  • Jean-Daniel Bontemps + 1 more

  • Research Article
  • Citations: 6
  • 10.14214/df.7
Impacts of climate change on forest growth: a modelling approach with application to management
  • Jan 1, 2005
  • Dissertationes Forestales
  • Juho Matala

The aim of this thesis was to modify and apply a statistical growth and yield model for analysing forest resources and optimal management under a changing climate in Finland. Initially, the structural and functional properties of physiological and statistical growth and yield models were compared under the current climate to assess whether the physiological model could be utilised in the modification of the statistical model (I). Thereafter, the impacts of elevated temperature and CO2 on tree growth were introduced into a statistical growth and yield model with species-specific transfer functions, which were formulated based on data simulated with a physiological model (II-III). These functions were created separately for the three main tree species, and they described the increase in stem volume growth of trees as a function of elevated temperature and CO2, stand density, competition status of a tree in a stand, geographical location, and site fertility type of a stand. This method allowed the internal dynamics of the statistical model to be followed when the impacts of climate change were applied to the volume growth, allocated between diameter and height growth. Finally, this methodology was applied to derive an optimal management solution for a forest region located in eastern Finland under a changing climate by using a large-scale forestry scenario model and National Forest Inventory sample plot data (IV). In the model comparisons, it was found that the physiological and statistical models agreed well in terms of relative growth rates regardless of tree species (I). This implies that both models predicted the competition within a stand and the effect of position on tree growth in a similar way. However, the statistical model was less sensitive to initial stand conditions and management than the physiological model. The transfer functions worked reasonably well in the statistical model, and the model predictions were logical as regards the differences in productivity between species, sites and locations under the current and changing climate (II, III). In these simulations, the volume growth was enhanced less in southern than in northern Finland, where currently low summer temperatures are more limiting to growth. In the regional forestry scenario analysis (IV), the accelerating tree growth under a changing climate increased the maximum sustainable removal of timber at the regional level. Changes in optimal forest management were also detected: the proportion of thinnings increased because the stands fulfilled thinning requirements earlier, and the optimisation allocated more cuttings to mineral soils, where extraction of wood was cheaper than on peatlands. Altogether, this study presents an attempt to integrate the capabilities of physiological and statistical growth and yield modelling approaches in order to make the latter more responsive to changing environmental conditions. As a result, the statistical model system can be expected to provide more precise predictions for regional forestry scenario analyses by endogenously solving optimal forest management under a changing climate in Finland.
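
The species-specific transfer functions themselves are not given in this summary; purely as a hypothetical illustration of the mechanism described (a multiplicative modifier on stem volume growth driven by temperature and CO2), a sketch might look like:

```python
# Purely illustrative transfer function: the functional form and coefficients
# are invented, not the thesis's fitted species-specific functions, which also
# depend on stand density, competition status, location, and site fertility.
def growth_modifier(dT, dCO2, a=0.03, b=0.0004):
    """Relative change in stem volume growth for a temperature rise dT (degC)
    and a CO2 rise dCO2 (ppm); 1.0 means no change."""
    return 1.0 + a * dT + b * dCO2

print(growth_modifier(dT=2.0, dCO2=350))  # 1.20, i.e. +20% under this toy scenario
```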

  • Research Article
  • Citations: 334
  • 10.1109/tmc.2002.1011059
A statistical modeling approach to location estimation
  • Jan 1, 2002
  • IEEE Transactions on Mobile Computing
  • T Roos + 2 more

Some location estimation methods, such as the GPS satellite navigation system, require nonstandard features either in the mobile terminal or the network. Solutions based on generic technologies not intended for location estimation purposes, such as the cell-ID method in GSM/GPRS cellular networks, are usually problematic due to their inadequate location estimation accuracy. In order to enable accurate location estimation when only inaccurate measurements are available, we present an approach to location estimation that is different from the prevailing geometric one. We call our approach the statistical modeling approach. As an example application of the proposed statistical modeling framework, we present a location estimation method based on a statistical signal power model. We also present encouraging empirical results from simulated experiments supported by real-world field tests.
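
A minimal sketch of the statistical (as opposed to geometric) idea: place a probability model on signal power given location and compute a posterior over candidate locations. The log-distance path-loss model, Gaussian noise, base-station layout, and parameter values below are all assumptions for illustration, not the paper's fitted signal power model.

```python
import numpy as np

P0, eta, sigma = -40.0, 3.0, 4.0                 # dBm at 1 m, path-loss exponent, noise sd
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_loc = np.array([30.0, 60.0])

def expected_power(loc, st):
    """Log-distance path-loss model for expected received power (dBm)."""
    d = np.maximum(np.linalg.norm(loc - st, axis=-1), 1.0)
    return P0 - 10 * eta * np.log10(d)

rng = np.random.default_rng(4)
measured = expected_power(true_loc, stations) + rng.normal(0, sigma, len(stations))

# Unnormalized log-posterior on a location grid, assuming a flat prior.
xs = ys = np.linspace(0, 100, 201)
gx, gy = np.meshgrid(xs, ys)
grid = np.stack([gx, gy], axis=-1)               # (201, 201, 2)
pred = expected_power(grid[..., None, :], stations)   # (201, 201, 3)
loglik = -0.5 * (((measured - pred) / sigma) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmax(loglik), loglik.shape)
print("MAP estimate:", gx[i, j], gy[i, j], "true:", true_loc)
```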

  • Research Article
  • 10.4037/aacnacc2021949
Ethical Issues in the Care of Emerging Adults in Pediatric Intensive Care Units.
  • Jun 15, 2021
  • AACN advanced critical care
  • Mary Brennan + 1 more

  • Research Article
  • 10.29352/mill0201.03.00049
Physiological dynamics of heart rate variability: a statistical modeling approach in vasovagal syncope
  • Jan 1, 2016
  • Millenium - Journal of Education, Technologies, and Health
  • Maria Seco + 1 more

Introduction: Syncope is defined as a transitory loss of consciousness and postural tone followed by rapid recovery. Attention has recently been given to a centrally mediated syncope with a drop in systemic pressure, a condition known as vasovagal syncope (VVS). Objectives: The analysis of Heart Rate Variability (HRV) is one of the main strategies to study VVS during standard protocols (e.g., the Tilt Test). The main objective of this work is to understand the relative contribution of several physiological variables - diastolic and systolic blood pressure (dBP and sBP), stroke volume (SV) and total peripheral resistance (TPR) - to the HRV signal. Methods: Statistical mixed models were used to model the behavior of the above variables in HRV. Data comprising more than 1500 observations from four patients with VVS were used; they were previously tested with classical spectral analysis for the basal (LF/HF = 3.01) and tilt phases (LF/HF = 0.64), indicating a vagal predominance in the tilt period. Results: In Model 1, the statistical models reveal a major role for dBP and a low influence of SV on HRV in the tilt phase. In Model 2, TPR showed a low influence on HRV in the tilt phase among VVS patients. Conclusions: HRV is influenced by a set of physiological variables whose individual contributions can be assessed to understand heart rate fluctuations. In this work, the use of statistical models highlights the importance of studying the role of dBP and SV in VVS.
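
To illustrate the kind of mixed model described here, the sketch below fits a linear mixed model with a patient-level random intercept to synthetic data shaped like the study's (four patients, ~1600 observations). The variable effects are invented, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_pat, n_obs = 4, 400                       # four patients, ~1600 observations

patient = np.repeat(np.arange(n_pat), n_obs)
dbp = rng.normal(80, 10, n_pat * n_obs)     # diastolic blood pressure (invented)
sv = rng.normal(70, 8, n_pat * n_obs)       # stroke volume (invented)
u = rng.normal(0, 30, n_pat)[patient]       # patient-level random intercepts
rr = 800 - 2.0 * dbp + 0.3 * sv + u + rng.normal(0, 20, n_pat * n_obs)

# Linear mixed model: fixed effects for dBP and SV, random intercept per patient.
X = sm.add_constant(np.column_stack([dbp, sv]))
res = sm.MixedLM(rr, X, groups=patient).fit()
print(res.params.round(2))                  # the dBP coefficient dominates, as in Model 1
```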

  • Research Article
  • Citations: 29
  • 10.1080/17477160801897067
The Swedish cost burden of overweight and obesity – evaluated with the PAR approach and a statistical modelling approach
  • Jan 1, 2008
  • International Journal of Pediatric Obesity
  • Knut Odegaard + 3 more

The rising trend in the prevalence of obesity, which is a major risk factor for a number of diseases, notably diabetes and cardiovascular diseases, has become a major public health concern in many countries during the past decades. This development has also led to an increased cost burden on the public health care delivery system, which has been documented in many studies. The standard approach for estimating the cost burden attributed to a risk factor is the so-called PAR (Population Attributable Risk) approach, an approach based on cross-sectional data. In this paper, the methods and findings of two studies that have documented the cost burden attributed to overweight and obesity on the public health care delivery system in Sweden are contrasted: one using the PAR approach and one using a statistical modeling approach based on longitudinal hospital care data covering 15 years for 33 000 individuals. The main motivation for this paper is that the study using the PAR approach is only available in Swedish. The PAR approach estimated a cost burden of 3 600 million SEK (390 million Euro), equivalent to 1.9% of national health care expenditure, out of which 1 800 million SEK (190 million Euro) were spent on hospital care. The statistical modeling approach estimated the corresponding cost burden for hospital care at 2 100 million SEK (230 million Euro). The statistical modeling approach provides no estimate of the total cost burden attributed to overweight and obesity.
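
For context, the PAR approach typically rests on Levin's attributable-fraction formula, PAR = p(RR - 1) / (1 + p(RR - 1)), with the attributable cost obtained by multiplying PAR by the total cost. The toy computation below uses illustrative inputs, not the study's:

```python
def levin_par(prevalence, relative_risk):
    """Levin's population attributable risk fraction:
    PAR = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative only: 15% obesity prevalence, relative risk of 1.8.
par = levin_par(0.15, 1.8)
attributable_cost = par * 20_000  # share of a hypothetical 20 000 MSEK care budget
print(f"PAR = {par:.1%}, attributable cost = {attributable_cost:.0f} MSEK")
```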

  • Research Article
  • Citations: 23
  • 10.1111/1365-2664.13468
Protecting juveniles, spawners or both: A practical statistical modelling approach for the design of marine protected areas
  • Jul 18, 2019
  • Journal of Applied Ecology
  • Arnaud Grüss + 3 more

Fish populations undertaking ontogenetic or spawning migrations pose challenges to marine protected area (MPA) planning because of the large extent of their distribution areas. There is a need to identify the juvenile and spawner hotspots of these populations that could be set aside as MPAs. Species distribution models making comprehensive use of available monitoring data and predicting the realized juvenile and spawner hotspots of migratory fish populations will assist resource managers with MPA planning. We developed a statistical modelling approach relying on multiple, regional monitoring datasets for assisting spatial protection efforts targeting the juveniles, spawners, or both life stages, of migratory fish species and species complexes. This approach predicts juvenile and spawner hotspot indices, and critical life stage (CLS) hotspot indices, which integrate both juvenile and spawner hotspot indices. We applied the approach to 11 vulnerable species of the grouper‐snapper complex of the U.S. Gulf of Mexico, which all form fish spawning aggregations (FSAs). The CLS hotspot index was predicted to be highest in the Pulley Ridge and Flower Garden Banks areas, followed by the West Florida Shelf, southwestern Florida waters and portions of the Louisiana‐Mississippi‐Alabama shelf. The Pulley Ridge Habitat Area of Particular Concern and Flower Garden Banks National Marine Sanctuary are two important existing MPAs of the U.S. Gulf of Mexico, whose possible expansion is being considered. The predicted CLS hotspot indices suggest that expanding these MPAs or increasing harvest regulations within them would offer substantial protection to both the juveniles and spawners of many FSA‐forming species of the grouper‐snapper complex. Synthesis and applications. As the number of marine protected areas (MPAs) continues to increase worldwide, statistical modelling approaches making comprehensive use of available data are urgently needed to support resource managers' abilities to establish sound and efficient spatial protection plans. The outputs of our statistical models can serve as inputs to conservation planning software packages seeking optimal marine protected area configurations, or can be directly employed by resource managers for formulating spatial protection plans.
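
The abstract does not state how the juvenile and spawner indices are integrated into the CLS hotspot index; as a purely hypothetical illustration, one simple combination rule is a cell-wise geometric mean:

```python
import numpy as np

# Hypothetical integration rule (the study's actual CLS definition is not given
# here): a geometric mean rewards cells where both life stages score high.
juvenile_idx = np.array([0.9, 0.2, 0.6])   # per-cell juvenile hotspot indices
spawner_idx = np.array([0.8, 0.7, 0.1])    # per-cell spawner hotspot indices
cls_idx = np.sqrt(juvenile_idx * spawner_idx)
print(cls_idx.round(2))                     # high only where both stages score high
```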
