An Evaluation of the Replicable Factor Analytic Solutions Algorithm for Variable Selection: A Simulation Study.

Abstract

Observed variable and factor selection are critical components of factor analysis, particularly when the optimal subset of observed variables and the number of factors are unknown and results cannot be replicated across studies. The Replicable Factor Analytic Solutions (RFAS) algorithm was developed to assess the replicability of factor structures, both in terms of the number of factors and the variables retained, while identifying the "best" or most replicable solutions according to predefined criteria. This study evaluated RFAS performance across 54 experimental conditions that varied in model complexity (six factor models), interfactor correlations (ρ = 0, .30, and .60), and sample size (n = 300, 500, and 1000). Under default settings, RFAS generally performed well and demonstrated its utility in producing replicable factor structures. However, performance declined with highly correlated factors, smaller sample sizes, and more complex models. RFAS was also compared with four alternative variable selection methods: Ant Colony Optimization (ACO), the weighted group Least Absolute Shrinkage and Selection Operator (LASSO), and two stepwise procedures based on a target Tucker-Lewis Index (TLI) criterion and a ΔTLI criterion, respectively. The stepwise and LASSO methods were largely ineffective at eliminating problematic variables under the studied conditions. In contrast, both RFAS and ACO successfully removed variables as intended, although the resulting factor structures often differed substantially between the two approaches. As with other variable selection methods, refining the algorithmic criteria may be necessary to further enhance model performance.
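
As a rough illustration of the scale of the design, the sketch below enumerates a 6 × 3 × 3 condition grid. The six model-complexity labels are a hypothetical reconstruction inferred only from the count of 54 conditions; the original study defines the actual model specifications.

```python
# Minimal sketch of the simulation condition grid. The six model labels are placeholders,
# inferred from 54 = 6 x 3 x 3; the correlations and sample sizes come from the abstract.
from itertools import product

model_specs = [f"model_{i}" for i in range(1, 7)]   # hypothetical labels for six factor models
interfactor_rho = [0.0, 0.30, 0.60]                 # interfactor correlations reported above
sample_sizes = [300, 500, 1000]                     # sample sizes reported above

conditions = list(product(model_specs, interfactor_rho, sample_sizes))
assert len(conditions) == 54                        # matches the 54 experimental conditions
print(conditions[0])                                # ('model_1', 0.0, 300)
```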

Similar Papers
  • Research Article
  • Cited by 5
  • 10.1088/1755-1315/187/1/012044
A Combined Modeling of Generalized Linear Mixed Model and LASSO Techniques for Analizing Monthly Rainfall Data
  • Nov 1, 2018
  • IOP Conference Series: Earth and Environmental Science
  • A Muslim + 3 more

Rainfall patterns are always of interest to investigate. This paper discusses the performance of three approaches to modeling rainfall data: the LASSO (Least Absolute Shrinkage and Selection Operator) method, the GLMM (Generalized Linear Mixed Model) method, and a combination of the GLMM and LASSO techniques. Rainfall data are usually collected on a regular basis and are therefore longitudinal. GLMMs are commonly employed to analyze longitudinal data, especially when the number of explanatory variables is small. When the number of explanatory variables is large and the variables are correlated, GLMM estimation suffers from ill-conditioning. These problems may be overcome by adding an L1 penalty, which performs variable selection and shrinkage simultaneously. In this paper, the combination of GLMM and LASSO techniques is evaluated using high-dimensional monthly rainfall data collected during 1981-2014 in the Indramayu sub-district. The results show that the combined GLMM-LASSO method is superior to the GLMM and LASSO methods used separately. This claim is supported by the evidence that the MSE of the combined method is smaller than the MSEs of the other two methods across various values of λ (lambda).
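
For readers unfamiliar with the L1 penalty mentioned above, here is a minimal, hypothetical scikit-learn sketch of LASSO shrinkage and selection on synthetic data; it does not reproduce the paper's GLMM+LASSO estimator or the rainfall data.

```python
# Illustrative LASSO variable selection on synthetic, correlated predictors: the L1 penalty
# zeroes out coefficients of uninformative variables while shrinking the rest.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)        # induce correlation between two predictors
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(size=n)

lasso = LassoCV(cv=5).fit(X, y)                      # cross-validation picks the penalty strength
selected = np.flatnonzero(lasso.coef_ != 0)
print("selected predictors:", selected)
```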

  • Research Article
  • Cited by 16
  • 10.1186/s41512-018-0043-4
Combined performance of screening and variable selection methods in ultra-high dimensional data in predicting time-to-event outcomes
  • Sep 26, 2018
  • Diagnostic and Prognostic Research
  • Lira Pi + 1 more

Background: Building prognostic models of clinical outcomes is an increasingly important research task and will remain a vital area in genomic medicine. Prognostic models of clinical outcomes are usually built and validated utilizing variable selection methods and machine learning tools. The challenges in ultra-high dimensional space, however, are not only to reduce the dimensionality of the data but also to retain the important variables which predict the outcome. Screening approaches, such as sure independence screening (SIS), iterative SIS (ISIS), and principled SIS (PSIS), have been developed to overcome the challenge of high dimensionality. We are interested in identifying important single-nucleotide polymorphisms (SNPs) and integrating them into a validated prognostic model of overall survival in patients with metastatic prostate cancer. While the abovementioned variable selection approaches have theoretical justification for selecting SNPs, the comparison and performance of these combined methods in predicting time-to-event outcomes have not previously been studied in ultra-high dimensional space with hundreds of thousands of variables. Methods: We conducted a series of simulations to compare the performance of different combinations of variable selection approaches and classification trees, such as the least absolute shrinkage and selection operator (LASSO), the adaptive LASSO (ALASSO), and random survival forest (RSF), in ultra-high dimensional data for the purpose of developing prognostic models for a time-to-event outcome that is subject to censoring. The variable selection methods were evaluated for discrimination (Harrell's concordance statistic), calibration, and overall performance. In addition, we applied these approaches to 498,081 SNPs from 623 Caucasian patients with prostate cancer. Results: When n = 300, ISIS-LASSO and ISIS-ALASSO chose all the informative variables, which resulted in the highest Harrell's c-index (> 0.80). On the other hand, with a small sample size (n = 150), ALASSO performed better than any other combination, as demonstrated by the highest c-index and/or overall performance, although there was evidence of overfitting. In analyzing the prostate cancer data, the ISIS-ALASSO, SIS-LASSO, and SIS-ALASSO combinations achieved the highest discrimination, with a c-index of 0.67. Conclusions: Choosing the appropriate variable selection method for training a model is a critical step in developing a robust prognostic model. Based on the simulation studies, the effective use of ALASSO or a combination of methods, such as ISIS-LASSO and ISIS-ALASSO, allows both for the development of prognostic models with high predictive accuracy and a low risk of overfitting, assuming moderate sample sizes.
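
A schematic of the screen-then-select idea (in the spirit of SIS followed by LASSO) is sketched below on synthetic, uncensored data; it only illustrates the two-stage logic and is not the survival-analysis workflow evaluated in the paper.

```python
# Two-stage sketch: marginal screening (SIS-style) followed by penalized selection (LASSO).
# Censoring and time-to-event modeling are deliberately ignored here.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p, d = 300, 5000, 100                             # toy ultra-high dimensional setting
X = rng.normal(size=(n, p))
y = X[:, 0] - 2.0 * X[:, 10] + rng.normal(size=n)

# Stage 1: rank features by absolute marginal correlation and keep the top d.
marginal = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
keep = np.argsort(marginal)[-d:]

# Stage 2: LASSO on the screened subset.
lasso = LassoCV(cv=5).fit(X[:, keep], y)
print("features surviving both stages:", keep[lasso.coef_ != 0])
```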

  • Research Article
  • Cited by 17
  • 10.1016/j.chemolab.2018.11.015
A variable informative criterion based on weighted voting strategy combined with LASSO for variable selection in multivariate calibration
  • Dec 7, 2018
  • Chemometrics and Intelligent Laboratory Systems
  • Ruoqiu Zhang + 8 more

  • Research Article
  • Cited by 1
  • 10.20956/j.v20i2.31632
Comparative Analysis of Ridge, LASSO, and Elastic Net Regularization Approaches in Handling Multicollinearity for Infant Mortality Data in South Sulawesi
  • Dec 24, 2023
  • Jurnal Matematika, Statistika dan Komputasi
  • Arief Rahman Nur + 2 more

The infant mortality rate is a crucial indicator for assessing health and infant care quality in a region. In efforts to reduce infant mortality rates, regression analysis serves as a tool to identify influential factors. However, regression analysis often encounters the challenge of multicollinearity, that is, high correlation among predictor variables. To address this issue, various regularization techniques can be applied, such as ridge regression, the least absolute shrinkage and selection operator (LASSO), and the elastic net. Ridge regression aims to control coefficient variance, while LASSO drives some coefficients to zero and thereby functions as variable selection. The elastic net combines the strengths of both methods by merging the ridge and LASSO penalties. The objective of this research is to evaluate the performance of the ridge regression, elastic net, and LASSO methods in handling multicollinearity, using infant mortality rate data for South Sulawesi Province. The results indicate that the elastic net method outperforms both the ridge and LASSO methods. The best-performing model is obtained with the elastic net, with a coefficient of determination of 60.81%, whereas the ridge and LASSO methods yield coefficients of determination of 54.11% and 58.18%, respectively. This demonstrates that the elastic net method produces more accurate models of the infant mortality rate data than the other methods.
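
The following hypothetical sketch contrasts the three regularizers on synthetic collinear data using scikit-learn; it does not reproduce the infant mortality data or the R² values reported in the paper.

```python
# Illustrative comparison of ridge, LASSO, and elastic net on deliberately collinear data.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 150, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)       # strong multicollinearity between x0 and x1
y = 2.0 * X[:, 0] + X[:, 3] + rng.normal(size=n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("ridge", RidgeCV()),
                    ("lasso", LassoCV(cv=5)),
                    ("elastic net", ElasticNetCV(cv=5, l1_ratio=[0.2, 0.5, 0.8]))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: held-out R^2 = {model.score(X_te, y_te):.3f}")
```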

  • Research Article
  • 10.47352/jmans.2774-3047.251
Performance of Ridge Regression, Least Absolute Shrinkage and Selection Operator, and Elastic Net in Overcoming Multicollinearity
  • Feb 23, 2025
  • Journal of Multidisciplinary Applied Natural Science
  • Dewi Retno Sari Saputro + 2 more

Multicollinearity is a violation of the assumptions of multiple linear regression analysis that can occur when there is high correlation between the independent variables; the same applies to variants of the multiple linear regression model such as the Geographically Weighted Regression (GWR) model. Multicollinearity makes least-squares parameter estimation unstable and produces large variances. What is desired, on the other hand, is an estimator with minimum variance, even if it is biased. Thus, one way to overcome multicollinearity is to use biased estimators, such as Ridge Regression (RR), the Least Absolute Shrinkage and Selection Operator (LASSO), and the Elastic Net (EN). In RR, the least-squares coefficients are shrunk toward zero, but the method cannot select independent variables. The parameters obtained from ridge regression are biased, while the variance of the resulting regression coefficients is relatively small. In addition, RR becomes increasingly difficult to interpret when a very large number of independent variables is used. LASSO, meanwhile, is a computational method that uses quadratic programming, applies shrinkage in the spirit of RR, and performs variable selection; it became widely known after the discovery of the Least-Angle Regression (LARS) algorithm. The LASSO method can shrink least-squares coefficients exactly to zero and thereby perform variable selection. LASSO also has weaknesses, which motivates the use of EN. In this article, the performance of the three methods is compared from a mathematical perspective. In summary, RR is helpful for grouping effects, where collinear features can be selected together; LASSO is suitable for feature selection when the dataset contains features with poor predictive power; and EN combines LASSO and RR, which has the potential to lead to simple yet predictive models.
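
For reference, the standard penalized least-squares objectives behind the three estimators discussed above are shown below in generic notation; the cited article may use different symbols.

```latex
\hat{\beta}_{\text{ridge}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2
\hat{\beta}_{\text{lasso}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_1
\hat{\beta}_{\text{EN}} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2
```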

  • Research Article
  • Cited by 28
  • 10.1002/sim.6257
Variable selection in subdistribution hazard frailty models with competing risks data.
  • Jul 10, 2014
  • Statistics in Medicine
  • Il Do Ha + 5 more

The proportional subdistribution hazards model (i.e., the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions, the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and HL, in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual datasets from multi-center clinical trials.

  • Research Article
  • Cited by 11
  • 10.1049/cje.2015.10.025
Variable Selection in Logistic Regression Model
  • Oct 1, 2015
  • Chinese Journal of Electronics
  • Shangli Zhang + 4 more

Variable selection is one of the most important problems in pattern recognition. In the linear regression model there are many methods that can solve this problem, such as the Least Absolute Shrinkage and Selection Operator (LASSO) and many improved LASSO methods, but there are few variable selection methods for generalized linear models. We study the variable selection problem in the logistic regression model. We propose a new variable selection method, the logistic elastic net, and prove that it has a grouping effect, meaning that strongly correlated predictors tend to be in or out of the model together. The logistic elastic net is particularly useful when the number of predictors (p) is much larger than the number of observations (n). By contrast, the LASSO is not a very satisfactory variable selection method when p is much larger than n. The advantages and effectiveness of this method are demonstrated using real leukemia data and a simulation study.
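
A minimal sketch of an elastic-net-penalized logistic regression in the p >> n setting is shown below; it uses scikit-learn's generic implementation rather than the authors' logistic elastic net, and synthetic data rather than the leukemia data.

```python
# Elastic-net logistic regression sketch: scikit-learn's saga solver supports the mixed
# L1/L2 penalty, which keeps groups of correlated predictors in or out together.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 80, 500                                       # p much larger than n, as discussed above
X = rng.normal(size=(n, p))
logits = 2.0 * X[:, 0] - 2.0 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.5, max_iter=5000).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(clf.coef_[0]).size)
```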

  • Research Article
  • Cited by 9
  • 10.3390/f13050787
Comparison of Variable Selection Methods among Dominant Tree Species in Different Regions on Forest Stock Volume Estimation
  • May 18, 2022
  • Forests
  • Gengsheng Fang + 3 more

The forest stock volume (FSV) is one of the crucial indicators of the quality of forest resources. Variable selection methods are usually used in FSV estimation models. However, few studies have explored which variable selection methods yield data sets with better explanatory power and robustness for the same dominant tree species in different regions once the feature variables have been filtered. In this study, we chose six dominant tree species groups from Lin'an District, Anji County, and a part of Longquan City: broad-leaved, coniferous, Masson pine, Chinese fir, coniferous and broad-leaved mixed forest, and all tree species combined (the last two groups are referred to as "mixed" and "all", respectively). Satellite images, terrain factors, and forest inventory data were then filtered by six variable selection methods (least absolute shrinkage and selection operator (LASSO), recursive feature elimination (RFE), stepwise regression (Step-Reg), permutation importance (PI), mean decrease impurity (MDI), and SelectFromModel based on LightGBM (SFM)), according to the dominant tree types in the different regions. The selected variables formed a new data set divided by dominant tree type. Extreme gradient boosting (XGBoost) was then combined with the variable selection methods to estimate the FSV. The results are as follows: for coniferous forest, RFE performed better both on average and in the separate regions; for Chinese fir and for all species, PI performed better both on average and in the separate regions; for Masson pine, MDI performed better both on average and in the separate regions; and for mixed forest, MDI performed better on average while RFE performed better in the separate regions. Overall, RFE, MDI, and PI all performed well at selecting variables for FSV estimation, both on average and in the separate regions. Furthermore, we examined the five most important factors for each tree type and found that tree age and canopy density were both of great importance to FSV estimation. Compared with using no variable selection, the results also showed that variable selection can improve model performance. Additionally, the results for different tree types in different regions suggested that small scale and high diversity of dominant tree types may lead to instability and unreliability of the results. The study provides some insight into the application of optimal variable selection methods for the same dominant tree type in different regions and will support the development of variable selection methods for FSV estimation.
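
As a small illustration of one of the six selection methods named above, the sketch below runs recursive feature elimination on synthetic data; a gradient-boosting regressor stands in for the XGBoost/LightGBM models used in the study.

```python
# Recursive feature elimination (RFE) sketch: repeatedly drop the least important
# feature of a tree-based regressor until a target subset size is reached.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(4)
n, p = 300, 30                                       # toy stand-in for image/terrain/inventory variables
X = rng.normal(size=(n, p))
y = 5.0 * X[:, 0] + 3.0 * X[:, 7] + rng.normal(size=n)

selector = RFE(GradientBoostingRegressor(random_state=0), n_features_to_select=5).fit(X, y)
print("retained variables:", np.flatnonzero(selector.support_))
```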

  • Research Article
  • Cited by 42
  • 10.1021/ci500715e
Recursive Random Forests Enable Better Predictive Performance and Model Interpretation than Variable Selection by LASSO.
  • Mar 16, 2015
  • Journal of Chemical Information and Modeling
  • Xiang-Wei Zhu + 2 more

Variable selection is of crucial significance in QSAR modeling since it increases the model's predictive ability and reduces noise. The selection of the right variables is far more complicated than the development of predictive models. In this study, eight continuous and categorical data sets were employed to explore the applicability of two distinct variable selection methods: random forests (RF) and the least absolute shrinkage and selection operator (LASSO). Variable selection was performed (1) by using recursive random forests to rule out a quarter of the least important descriptors at each iteration and (2) by using LASSO modeling with 10-fold inner cross-validation to tune its penalty λ for each data set. Along with regular statistical parameters of model performance, we proposed the highest pairwise correlation rate, the average pairwise Pearson's correlation coefficient, and the Tanimoto coefficient to evaluate the optimal descriptors selected by RF and LASSO in an extensive way. Results showed that variable selection could allow a tremendous reduction of noisy descriptors (at most 96% with the RF method in this study) and apparently enhance the model's predictive performance as well. Furthermore, random forests showed the property of gathering important predictors without restricting their pairwise correlation, which is contrary to LASSO. The mutual exclusion of highly correlated variables in LASSO modeling tends to skip important variables that are highly related to response endpoints and thus undermines the model's predictive performance. The optimal variables selected by RF share low similarity with those selected by LASSO (e.g., the Tanimoto coefficients were smaller than 0.20 in seven out of eight data sets). We found that the differences between RF and LASSO predictive performances mainly resulted from the variables selected by the different strategies rather than from the learning algorithms. Our study showed that the right selection of variables is more important than the learning algorithm for modeling. We hope that a standard procedure could be developed based on these proposed statistical metrics to select the truly important variables for model interpretation, as well as for further use to facilitate drug discovery and environmental toxicity assessment.
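
The recursive elimination loop described above (drop the least important quarter of descriptors at each iteration) can be sketched as follows on synthetic data; the stopping rules, inner validation, and categorical endpoints used in the paper are omitted.

```python
# Recursive random-forest elimination sketch: refit, drop the least important quarter
# of descriptors by impurity importance, and repeat until few descriptors remain.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n, p = 200, 64
X = rng.normal(size=(n, p))
y = 4.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(size=n)

active = np.arange(p)
while active.size > 8:
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, active], y)
    order = np.argsort(rf.feature_importances_)      # ascending importance
    active = active[order[active.size // 4:]]        # drop the least important quarter
print("surviving descriptors:", np.sort(active))
```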

  • Conference Article
  • Cited by 4
  • 10.1109/iww-bci.2014.6782565
Channel selection for simultaneous myoelectric prosthesis control
  • Feb 1, 2014
  • Han-Jeong Hwang + 2 more

To develop a clinically available prosthesis based on electromyography (EMG) signals, the number of recording electrodes should be as small as possible. In this study, we investigate the possibility of using the least absolute shrinkage and selection operator (LASSO) to find electrode subsets suitable for regression-based myoelectric prosthesis control. EMG signals were recorded using 192 electrodes while ten subjects performed two degree-of-freedom (DoF) wrist movements. From the full set of channels, we selected subsets of 96, 64, 48, 32, 24, 16, 12, and 8 electrodes using the LASSO method. As a baseline, electrode subsets of the same sizes were arbitrarily selected with regular spacing (uniform selection method). Decoding performance was estimated using the r-square value. The electrode subsets selected by the LASSO method generally outperformed those chosen by the uniform selection method. In particular, the performance of the LASSO method was significantly higher when using subsets of 8 electrodes. From these results, we confirm that the LASSO method can be used to select reasonable electrode subsets for regression-based myoelectric prosthesis control.
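
A toy version of LASSO-based channel ranking is sketched below; it uses one synthetic feature per channel and a single output, unlike the 192-electrode, two-DoF setting in the study.

```python
# Rank channels by LASSO coefficient magnitude and keep a fixed-size subset (here 8),
# loosely mirroring the electrode-subset selection described above.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n_samples, n_channels = 400, 48
X = rng.normal(size=(n_samples, n_channels))         # one feature per channel for simplicity
y = X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.2, size=n_samples)

coef = Lasso(alpha=0.05).fit(X, y).coef_
subset = np.argsort(np.abs(coef))[-8:]               # keep the 8 most influential channels
print("selected channels:", np.sort(subset))
```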

  • Research Article
  • Cited by 1
  • 10.1186/s12874-023-02121-1
Developing non-response weights to account for attrition-related bias in a longitudinal pregnancy cohort
  • Dec 14, 2023
  • BMC Medical Research Methodology
  • Tona M Pitt + 6 more

Background: Prospective cohorts may be vulnerable to bias due to attrition. Inverse probability weights have been proposed as a method to help mitigate this bias. The current study used the "All Our Families" longitudinal pregnancy cohort of 3351 maternal-infant pairs and aimed to develop inverse probability weights using logistic regression models to predict study continuation versus drop-out from baseline to the three-year data collection wave. Methods: Two methods of variable selection were used. One was a knowledge-based a priori variable selection approach, while the second used the Least Absolute Shrinkage and Selection Operator (LASSO). The ability of each model to predict continuing participation was evaluated in terms of discrimination and calibration by examining the area under the receiver operating curve (AUROC) and calibration plots, respectively. Stabilized inverse probability weights were generated using the predicted probabilities. Weight performance was assessed using standardized differences of baseline characteristics for those who continued in the study and those who did not, with and without weights (unadjusted estimates). Results: The a priori and LASSO variable selection prediction models had good and fair discrimination, with AUROCs of 0.69 (95% Confidence Interval [CI]: 0.67–0.71) and 0.73 (95% CI: 0.71–0.75), respectively. Calibration plots and non-significant Hosmer-Lemeshow Goodness of Fit tests indicated that both the a priori (p = 0.329) and LASSO (p = 0.242) models were well calibrated. Unweighted results indicated large (> 10%) standardized differences in 15 demographic variables (range: 11 − 29%) when comparing those who continued in the study with those who did not. Weights derived from the a priori and LASSO models reduced standardized differences relative to unadjusted estimates, with largest differences of 13% and 5%, respectively. Additionally, when the same LASSO variable selection method was applied to develop weights for future data collection waves, standardized differences remained below 10% for each demographic variable. Conclusion: The LASSO variable selection approach produced robust weights that addressed non-response bias more than the knowledge-driven approach. These weights can be applied to analyses across multiple longitudinal waves of data collection to reduce bias.
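
The stabilized-weight construction described above can be sketched roughly as follows, assuming a fitted model of the probability of continuing in the study; the covariates and model here are synthetic placeholders, not the cohort's actual specification.

```python
# Stabilized inverse probability weights for attrition: fit P(continue | baseline covariates),
# then weight continuers by marginal probability / individual predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 1000, 12
X = rng.normal(size=(n, p))                          # baseline covariates (synthetic)
continued = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0] - 0.8 * X[:, 2]))))

model = LogisticRegression(penalty="l1", solver="liblinear").fit(X, continued)
p_hat = model.predict_proba(X)[:, 1]                 # predicted probability of continuing

weights = continued.mean() / p_hat[continued == 1]   # stabilized weights for continuers
print("weight range:", weights.min().round(2), "-", weights.max().round(2))
```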

  • Research Article
  • Cited by 50
  • 10.1002/sim.5937
A comparative study of variable selection methods in the context of developing psychiatric screening instruments
  • Aug 11, 2013
  • Statistics in Medicine
  • Feihan Lu + 1 more

The development of screening instruments for psychiatric disorders involves item selection from a pool of items in existing questionnaires assessing clinical and behavioral phenotypes. A screening instrument should consist of only a few items and have good accuracy in classifying cases and non-cases. Variable/item selection methods such as the Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, Classification and Regression Tree, Random Forest, and the two-sample t-test can be used in this context. Unlike situations where variable selection methods are most commonly applied (e.g., ultra-high-dimensional genetic or imaging data), psychiatric data usually have lower dimensions and are characterized by the following factors: correlations and possible interactions among predictors, unobservability of important variables (i.e., true variables not measured by available questionnaires), the amount and pattern of missing values in the predictors, and the prevalence of cases in the training data. We investigate how these factors affect the performance of several variable selection methods and compare them with respect to selection performance and prediction error rate via simulations. Our results demonstrated that: (1) for complete data, LASSO and Elastic Net outperformed other methods with respect to variable selection and future data prediction, and (2) for certain types of incomplete data, Random Forest induced bias in imputation, leading to incorrect ranking of variable importance. We propose the Imputed-LASSO, combining Random Forest imputation and LASSO; this approach offsets the bias in Random Forest and offers a simple yet efficient item selection approach for missing data. As an illustration, we apply the methods to items from the standard Autism Diagnostic Interview-Revised version.
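
A rough sketch of the Imputed-LASSO idea (tree-based imputation followed by LASSO selection) is shown below; scikit-learn's iterative imputer with a random-forest estimator is used here as a stand-in for the paper's Random Forest imputation step, and the data are synthetic.

```python
# Impute missing item responses with a forest-based iterative imputer, then run LASSO selection.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
n, p = 300, 15
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - X[:, 5] + rng.normal(size=n)
X[rng.random(size=(n, p)) < 0.1] = np.nan            # inject 10% missing item responses

X_imp = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                         random_state=0).fit_transform(X)
lasso = LassoCV(cv=5).fit(X_imp, y)
print("items retained:", np.flatnonzero(lasso.coef_ != 0))
```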

  • Research Article
  • Cited by 121
  • 10.1534/genetics.113.150078
Genome-Wide Prediction of Traits with Different Genetic Architecture Through Efficient Variable Selection
  • Oct 1, 2013
  • Genetics
  • Valentin Wimmer + 5 more

In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.

  • Research Article
  • Cited by 9
  • 10.1080/00952990.2019.1648484
The clinical consequences of variable selection in multiple regression models: a case study of the Norwegian Opioid Maintenance Treatment program
  • Oct 11, 2019
  • The American Journal of Drug and Alcohol Abuse
  • Marianne Riksheim Stavseth + 2 more

Background: Selecting which variables to include in multiple regression models is a pervasive problem in medical research. Objectives: Based on questionnaire data (n = 18538, 69.9% men) from the Norwegian Opioid Maintenance Treatment Program, this study aims to compare the performance of different variable selection methods and the potential clinical consequences of choice of method. The effect of missing data is also explored. Methods: The dependent variable was engagement in criminal behavior while in treatment. Twenty-nine potential covariates on demographics, psychosocial factors and drug use were tested for inclusion in a multiple logistic regression model. Both complete case and multiply imputed data were considered. We compared the results from variable selection methods ranging from expert-based and purposeful variable selection, through stepwise methods, to more recently developed penalized regression using the Least Absolute Shrinkage and Selection Operator (LASSO). Results: The various variable selection methods resulted in regression models including from 9 to 22 covariates. The stepwise selection procedures generated the models with the most covariates included. The choice of variable selection method directly affected the estimated regression coefficients, both in effect size and statistical significance. For several variables the expert-based approach disagreed with all data-driven methods. Conclusions: The choice of variable selection method may strongly affect the resulting regression model, along with accompanying effect sizes and confidence intervals. This may affect clinical conclusions. The process should consequently be given sufficient consideration in model building. We recommend combining expert knowledge with a data-driven variable selection method to explore the models’ robustness.

  • Research Article
  • Cited by 1
  • 10.35596/1729-7648-2023-21-4-110-117
Combined Method for Informative Feature Selection for Speech Pathology Detection
  • Aug 29, 2023
  • Doklady BGUIR
  • D. S. Likhachov + 3 more

The task of detecting vocal abnormalities is characterized by a small amount of available data for training, as a consequence of which classification systems that use low-dimensional data are the most relevant. We propose to use LASSO (least absolute shrinkage and selection operator) and BSS (backward stepwise selection) methods together to select the most significant features for the detection of vocal pathologies, in particular amyotrophic lateral sclerosis. Features based on fine-frequency cepstral coefficients, traditionally used in speech signal processing, and features based on discrete estimation of the autoregressive spectrum envelope are used. Spectral features based on the autoregressive process envelope spectrum are extracted using the generative method, which involves calculating a discrete Fourier transform of the report sequence generated using the autoregressive model of the input voice signal. The sequence is generated by the autoregressive model so as to account for the periodic nature of the Fourier transform. This improves the accuracy of the spectrum estimation and reduces the spectral leakage effect. Using LASSO in conjunction with BSS allowed us to improve the classification efficiency using a smaller number of features as compared to using the LASSO method alone.
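
A hedged sketch of combining LASSO pre-selection with backward stepwise elimination is given below; synthetic features and logistic regression replace the acoustic features and classifier used in the paper, and scikit-learn's sequential selector stands in for the BSS procedure.

```python
# Two-step sketch: LASSO keeps a candidate pool of features, then backward stepwise
# selection prunes the pool further for a small, low-dimensional classifier.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression, Lasso

rng = np.random.default_rng(9)
n, p = 120, 40                                       # small-sample, many-feature setting
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Step 1: lightly penalized LASSO retains a candidate pool of features.
pool = np.flatnonzero(Lasso(alpha=0.01).fit(X, y).coef_ != 0)

# Step 2: backward stepwise elimination prunes the pool to a small final set.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=5,
                                direction="backward").fit(X[:, pool], y)
print("final feature set:", pool[sfs.get_support()])
```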
