Related Topics

  • Markov Chain Monte Carlo Algorithm
  • Markov Chain Monte Carlo Sampling
  • Markov Chain Monte Carlo

Articles published on Gibbs sampling

5368 search results, sorted by recency
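Every entry below applies Gibbs sampling in some form. As orientation, here is a minimal sketch of the technique on a toy target — a standard bivariate normal with correlation rho, where each full conditional is a univariate normal. This is a generic textbook illustration, not drawn from any paper listed here:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is univariate normal:
    x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x,
    so the sampler alternates exact draws from the two conditionals.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1.0 - rho ** 2) ** 0.5
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)  # draw x from its full conditional
        y = rng.gauss(rho * x, sd)  # draw y from its full conditional
        samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
```

Discarding an initial burn-in and computing the empirical correlation of the remaining draws recovers a value close to rho, which is the usual sanity check for this sampler.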
  • Research Article
  • 10.1093/geroni/igaf120
Advanced topic modeling with large language models: analyzing social media content from dementia caregivers
  • Dec 26, 2025
  • Innovation in Aging
  • Weiqing He + 10 more

Background and Objectives: While traditional topic modeling methods have been applied to analyze social media content from dementia caregivers, they often struggle with semantic understanding and coherent topic generation. This study explores the direct application of large language models (LLMs) for topic modeling of caregiver tweets, aiming to leverage their advanced semantic comprehension capabilities. Research Design and Methods: We analyzed 231,870 tweets from dementia caregivers after preprocessing, using ChatGPT as the primary topic modeling tool. To address context length limitations, we developed a 2-stage approach: first splitting the dataset into 226 batches of 1000 tweets each for initial topic extraction, then combining these results through a second-stage prompt for final topic synthesis. We compared our approach against 11 baseline methods, including Latent Dirichlet Allocation (LDA), the Gibbs Sampling Dirichlet Multinomial Mixture model (GSDMM), their term-weighted variants, and state-of-the-art BERTopic models. Topic quality was evaluated using Sentence-BERT-based coherence scores, and topic comprehensiveness was assessed through both ChatGPT and human expert evaluation. Results: Our LLM-based approach achieved a coherence score of 0.358, significantly outperforming all baseline methods. Traditional approaches like GSDMM (0.317) and LDA (0.320), their term-weighted variants (ranging from 0.264 to 0.302), and BERTopic variants (approximately 0.30) showed lower coherence scores. The 2-stage batching strategy effectively handled the large dataset while maintaining topic quality and representativeness. Expert evaluation confirmed the topics’ relevance to caregiver experiences and their comprehensive coverage of key themes. Discussion and Implications: This study introduces a novel methodology for applying LLMs to large-scale topic modeling tasks, demonstrating superior performance over traditional and state-of-the-art approaches. The significant improvement in coherence scores suggests that LLMs can better capture the semantic relationships within topics. Our approach addresses key challenges in context length limitations and prompt engineering, while providing more coherent and interpretable insights into caregiver experiences that can inform targeted support strategies.
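The 2-stage batching strategy described in this abstract can be sketched generically. In the toy below, `two_stage_topic_modeling` is a hypothetical helper; `toy_extract` and `toy_merge` are stand-ins for the paper's per-batch LLM extraction prompt and second-stage synthesis prompt, which are not published here:

```python
def two_stage_topic_modeling(docs, batch_size, extract_topics, merge_topics):
    """Two-stage batched topic modeling for corpora exceeding a context limit.

    Stage 1: split the corpus into fixed-size batches and extract topics per batch.
    Stage 2: merge the per-batch topic lists into a final synthesis.
    """
    batches = [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]
    per_batch = [extract_topics(batch) for batch in batches]  # stage 1
    return merge_topics(per_batch)                            # stage 2

# Toy stand-ins: count keyword hits per batch, then merge the counts.
def toy_extract(batch):
    return {"caregiving": sum("care" in t for t in batch)}

def toy_merge(results):
    merged = {}
    for r in results:
        for topic, count in r.items():
            merged[topic] = merged.get(topic, 0) + count
    return merged

corpus = ["care tips", "respite care", "sleep", "care stress"] * 250  # 1000 docs
final = two_stage_topic_modeling(corpus, 100, toy_extract, toy_merge)
```

The structural point is that stage 1 keeps each call within a fixed context budget, while stage 2 sees only the (much smaller) per-batch summaries.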

  • Research Article
  • 10.1111/obes.70034
Bayesian Estimation of Treatment Effects in Interactive Fixed Effects Models
  • Dec 22, 2025
  • Oxford Bulletin of Economics and Statistics
  • Osman Doğan + 1 more

ABSTRACT In this study, we suggest an imputation approach for estimating treatment effect parameters when untreated potential outcomes follow a panel data model that has both interactive fixed effects (IFE) and additive two‐way fixed effects. In settings with common treatment timing and staggered treatment adoption, we consider a hybrid approach involving classical and Bayesian methods. First, using the classical random sampling approach across units, we show that treatment effect parameters are identified in our setting under a selection on observables and unobservables assumption. We then suggest an efficient Gibbs sampler for estimating the treatment effect parameters using our suggested imputation approach. We consider two Bayesian methods for selecting the number of factors in the postulated model for untreated potential outcomes. We provide simulation evidence showing that our imputation approach performs satisfactorily. In an empirical application, we use our approach to study the causal effect of police presence on crime.

  • Research Article
  • 10.1080/00036846.2025.2597474
Beyond single-network assumptions: structural uncertainty and systemic risk in financial networks
  • Dec 4, 2025
  • Applied Economics
  • Congyuan Pang + 4 more

ABSTRACT This study develops a systemic risk assessment framework that explicitly incorporates structural uncertainty in interbank networks. The framework combines four network typologies (random, scale-free, hierarchical, and core-periphery) with Gibbs sampling, and is applied to 37 listed Chinese banks from Q3 2019 to Q3 2022. Results show that systemic risk assessments differ substantially across network structures. Under selected shocks, core-periphery networks exhibit the highest risk, while random networks rank lowest; under random shocks, this ordering reverses. Core-periphery and random networks typically yield the maximum and minimum systemic risk estimates under the examined model settings, particularly under selected shocks, providing practical intervals for robust evaluation under uncertainty. Bank characteristics further interact with network structures to shape contagion outcomes. By shifting the paradigm from assuming known structures to managing structural uncertainty, the study offers both methodological innovation and actionable insights for macroprudential regulation.

  • Research Article
  • 10.1371/journal.pone.0332607.r004
JSTMapp: A web-based joint spatiotemporal modelling and mapping application for epidemiologists
  • Dec 2, 2025
  • PLOS One
  • Alfred Ngwira + 6 more

Disease mapping models help create disease risk maps, which public health policymakers can use to design disease control and monitoring programmes. These models are now routinely implemented using spatial statistical software packages that use frequentist estimation methods, such as SaTScan and HDSpatialScan, and Bayesian estimation methods, such as the Windows version of Bayesian inference using Gibbs sampling (WinBUGS) and R integrated nested Laplace approximation (INLA). We aimed to develop a user-friendly joint disease spatiotemporal modelling and mapping application (JSTMapp) for epidemiologists and health statistics analysts based on Bayesian methods. Using the R package Shiny and utilising the proven and embedded joint spatial modelling technology in the Bayesian statistical software INLA, we developed the JSTMapp. To illustrate its usage, we used cattle bovine tuberculosis (BTB) and human extrapulmonary tuberculosis (EPTB) data in Africa. The application enables the estimation, mapping, and visualisation of both disease-specific and general spatial and temporal risk factors. It can also evaluate spatial, temporal and spatiotemporal correlations. Additionally, exploratory analyses can be performed, such as mapping the standardised disease incidence ratio. The application showed improved performance when launched from GitHub R as opposed to online from the Shiny server. Online performance might be improved by hosting on a personal server rather than the Shiny server.

  • Research Article
  • 10.3168/jds.2025-26646
Genetic parameters of mid-infrared-predicted methane production and its relationship with production traits in Walloon Holstein dairy cows.
  • Dec 1, 2025
  • Journal of dairy science
  • H Atashi + 5 more


  • Research Article
  • 10.18860/cauchy.v10i2.31626
Bayesian Geographically Weighted Generalized Poisson Regression Modeling on Maternal Mortality in NTT in 2022
  • Nov 30, 2025
  • CAUCHY: Jurnal Matematika Murni dan Aplikasi
  • Dewi Ratnasari Wijaya + 2 more

Maternal mortality is a crucial indicator of healthcare quality, particularly in East Nusa Tenggara (NTT) Province, which still records high mortality rates with significant spatial variation. This study aims to model maternal mortality in NTT in 2022 using the Bayesian Geographically Weighted Generalized Poisson Regression (BGWGPR) approach. This method integrates spatial weighting techniques with Bayesian parameter estimation through Gibbs Sampling to address spatial data characterized by overdispersion. Significant factors, including pregnant women's visits to healthcare facilities (K1), were found to influence the distribution of maternal deaths across districts in NTT. The model identifies that visits to healthcare facilities (K1) (X_1) are significant across all regions, while the variable for pregnant women receiving Tetanus Toxoid (X_3) is only significant in Alor and Timor Tengah Selatan. This model not only provides insights into determining factors but also helps identify priority areas for intervention. Therefore, this study contributes to evidence-based health policy-making aimed at reducing maternal mortality in NTT. The BGWGPR approach proves to be relevant for analyzing complex spatial data and can be applied to other epidemiological cases.

  • Research Article
  • 10.1038/s41467-025-65765-1
Polynomial-time quantum Gibbs sampling for the weak and strong coupling regime of the Fermi-Hubbard model at any temperature
  • Nov 28, 2025
  • Nature Communications
  • Štěpán Šmíd + 3 more

Quantum computers hold the potential to revolutionise the simulation of quantum many-body systems, with profound implications for fundamental physics and applications like molecular and material design. However, demonstrating quantum advantage in simulating quantum systems of practical relevance remains a significant challenge. In this work, we introduce a quantum algorithm for preparing Gibbs states of interacting fermions on a lattice with provable polynomial resource requirements. Our approach builds on recent progress in theoretical computer science that extends classical Markov chain Monte Carlo methods to the quantum domain. We derive a bound on the mixing time for quantum Gibbs state preparation by showing that the generator of the quantum Markovian evolution is gapped at any temperature up to a maximal interaction strength. This enables the efficient preparation of low-temperature states of weakly interacting fermions and the calculation of their free energy. We present exact numerical simulations for small system sizes that support our results and identify well-suited algorithmic choices for simulating the Fermi-Hubbard model beyond our rigorous guarantees.

  • Research Article
  • 10.1002/sim.70326
A Bayesian Parametric and Nonparametric Approach for the Imputation of Multivariate Left-Censored Data Due to Limit of Detection.
  • Nov 27, 2025
  • Statistics in medicine
  • Federico L Perlino + 3 more

Left-censored observations due to limits of detection and/or quantification are common in clinical and epidemiologic research when continuous predictors are assessed from human specimens. In these settings, values below a certain threshold are not detectable in laboratory analysis and are reported as missing in the dataset. Classical imputation approaches have mostly relied on imputing the same number for all non-detected samples, thus compromising the continuous nature of the censored variables and affecting their variability and potential inclusion in regression modeling. Continuous imputations have been presented, but generally focusing on a single variable at a time. It is common, moreover, for the same human specimen to be used for the quantification of several biomarkers or exposures simultaneously, thus resulting in a complex set of multivariate and possibly correlated left-censored observations. To the best of our knowledge, there is no established framework that flexibly accounts for the real-world complexity of these data. We propose a Bayesian multiple imputation (MI) approach that relies on the introduction of multivariate latent variables to handle multivariate left-censored data. We present a general framework, accommodating both a parametric approach, assuming multivariate normality of the data, and a nonparametric approach, modeling observations by means of a location Dirichlet process mixture of multivariate normal kernels. Both approaches are implemented through a Gibbs sampling scheme. The performances of our approach are investigated with a simulation study based on environmental exposures, and illustrated by analyzing a real dataset on cardiovascular biomarkers.
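A minimal sketch of the data-augmentation idea behind this kind of scheme, for a single left-censored variable (stdlib only): the censored values are redrawn from the model truncated below the limit of detection, then the model is updated, and the two steps alternate. To stay self-contained, the mean and sd here are updated by plug-in estimates rather than drawn from their posterior, and the multivariate Dirichlet-process machinery of the paper is omitted entirely — this is an illustrative simplification, not the authors' implementation:

```python
import random
from statistics import NormalDist, mean, stdev

def gibbs_impute_left_censored(observed, n_censored, lod, n_iter=200, seed=0):
    """Impute values below a limit of detection (LOD) by data augmentation.

    Alternates two steps, Gibbs-style:
      1. given current imputations, refit the normal model's mean and sd;
      2. given the model, redraw each censored value from N(mu, sd)
         truncated to (-inf, lod), via the inverse-CDF method.
    """
    rng = random.Random(seed)
    imputed = [lod / 2.0] * n_censored            # crude starting values
    for _ in range(n_iter):
        data = observed + imputed
        mu, sd = mean(data), stdev(data)          # step 1 (plug-in update)
        dist = NormalDist(mu, sd)
        p_lod = dist.cdf(lod)                     # probability mass below the LOD
        imputed = [dist.inv_cdf(rng.uniform(1e-9, p_lod))
                   for _ in range(n_censored)]    # step 2 (truncated draws)
    return imputed

obs = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.7, 2.9]   # detected values
imps = gibbs_impute_left_censored(obs, n_censored=3, lod=2.0)
```

Because each draw inverts the CDF at a uniform point below `cdf(lod)`, every imputed value lands below the LOD by construction, preserving the continuous spread that single-value substitution destroys.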

  • Research Article
  • 10.1007/s11222-025-10781-w
A note on auxiliary mixture sampling for Bayesian Poisson models
  • Nov 24, 2025
  • Statistics and Computing
  • Aldo Gardini + 2 more

Abstract Bayesian hierarchical Poisson models are an essential tool for analyzing count data. However, designing efficient algorithms to sample from the posterior distribution of the target parameters remains a challenging task. Auxiliary mixture sampling algorithms have been proposed to this aim. They involve two steps of data augmentation: the first leverages the theory of Poisson processes, and the second approximates the residual distribution of the resulting model through a mixture of Gaussian distributions. In this way, an approximate Gibbs sampler can be implemented. This strategy is particularly beneficial for latent Gaussian models, as it allows one to exploit the sparsity of the precision matrix associated with the random effects and to efficiently incorporate linear constraints. In this paper, we focus on the accuracy of the approximation step, highlighting scenarios where the mixture fails to represent accurately the true underlying distribution, leading to a lack of convergence in the algorithm. We outline key features to monitor, in order to assess if the approximation performs as intended. Building on this, we propose a robust version of the auxiliary mixture sampling algorithm. Our approach includes mechanisms for detecting approximation failures and introduces an enhanced approximation of the right tail of the auxiliary variable distribution, supplemented by a Metropolis-Hastings correction step when needed. Finally, we evaluate the proposed algorithm together with the original mixture sampling algorithms on both simulated and real datasets.

  • Research Article
  • 10.1080/00401706.2025.2574417
Supervised Learning with Inter- and Intra-Dependence in Multilayer Networks with Applications in Security Systems Analysis
  • Nov 24, 2025
  • Technometrics
  • Jose Rodriguez-Acosta + 3 more

Multilayer networks are increasingly used in security systems engineering to represent distinct domains of protection, such as physical, digital, human, and infrastructure layers. Each layer is an undirected network, depicted as a symmetric matrix, where nodes correspond to entities and cell values denote their associations across different contexts. This article introduces a Bayesian supervised learning framework for predicting continuous outcomes from multilayer network predictors. Unlike existing methods, it leverages both inter- and intra-layer dependencies using low-rank coefficient models shared across layers. A structured variable selection prior enables identification of influential nodes and edges while maintaining computational efficiency. We demonstrate the framework on Sandia National Laboratories security network data, accurately predicting time to threat detection and highlighting statistically significant nodes. Empirical results show our method outperforms existing approaches in inference and prediction. Supplementary material provides additional simulations, Gibbs sampler construction, and posterior convergence analyses.

  • Research Article
  • 10.1111/jbg.70029
Genetic Parameters of Age at Conception in Nellore Females Using Threshold Models.
  • Nov 17, 2025
  • Journal of animal breeding and genetics = Zeitschrift fur Tierzuchtung und Zuchtungsbiologie
  • Raimundo Nonato Colares Camargo Júnior + 10 more

Age at conception is a critical factor in the intensification of production systems: it underpins reproductive efficiency and Nellore female productivity, and thus genetic progress. However, low heritability estimates currently limit the effectiveness of selection responses, underscoring the need for more precise genetic parameter estimation methods. The aim of this study was to evaluate the genetic associations between body weights at weaning and yearling, and age at conception, in Nellore females. The study employed a mixed polychotomous threshold model, a framework appropriate for analysing categorical variables with an underlying continuous distribution. Records of body weights at weaning and yearling from 796 animals were analysed together with female age at conception, which was treated as a categorical variable associated with conception success. The (co)variance components were estimated via Bayesian inference using a Gibbs sampler. The mean heritability values were 0.43 (0.27; 0.60) for weaning weight, 0.63 (0.46; 0.81) for yearling weight, and 0.19 (0.06; 0.40) for the categorical variable of age at conception. While the body weights exhibited a high additive genetic correlation (0.79; 95% CI: 0.57; 0.96), correlations were lower between the categorical variable and weaning (-0.21; 95% CI: -0.75; 0.28) and yearling (0.34; 95% CI: -0.14; 0.71) weights. The study concluded that indicators of age at conception should incorporate additional selective criteria beyond body weight in order to improve the probability of conception.

  • Research Article
  • 10.1093/biostatistics/kxaf028
Bayesian mapping of mortality clusters
  • Nov 6, 2025
  • Biostatistics (Oxford, England)
  • Andrea Sottosanti + 3 more

Summary: Disease mapping analyses the distribution of several disease outcomes within a territory. Primary goals include identifying areas with unexpected changes in mortality rates, studying the relation among multiple diseases, and dividing the analysed territory into clusters based on the observed levels of disease incidence or mortality. In this work, we focus on detecting spatial mortality clusters, which occur when neighbouring areas within a territory exhibit similar mortality levels due to one or more diseases. When multiple causes of death are examined together, it is relevant to identify not only the spatial boundaries of the clusters but also the diseases that lead to their formation. However, existing methods in the literature struggle to address this dual problem effectively and simultaneously. To overcome these limitations, we introduce perla, a multivariate Bayesian model that clusters areas in a territory according to the observed mortality rates of multiple causes of death, also exploiting the information of external covariates. Our model incorporates the spatial structure of data directly into the clustering probabilities by leveraging the stick-breaking formulation of the multinomial distribution. Additionally, it exploits suitable global-local shrinkage priors to ensure that the detection of clusters depends on diseases showing concrete increases or decreases in mortality levels, while excluding uninformative diseases. We propose a Markov chain Monte Carlo algorithm for posterior inference that consists of closed-form Gibbs sampling moves for nearly every model parameter, without requiring complex tuning operations. This work is primarily motivated by a case study on the territory of a local unit within the Italian public healthcare system, known as ULSS6 Euganea. To demonstrate the flexibility and effectiveness of our methodology, we also validate perla with a series of simulation experiments and an extensive case study on mortality levels in U.S. counties.

  • Research Article
  • 10.1080/00949655.2025.2575872
Variable selection for zero-inflated Poisson regression model
  • Nov 5, 2025
  • Journal of Statistical Computation and Simulation
  • Haichao Zhang + 1 more

The paper implements an efficient algorithm for variable selection in the zero-inflated Poisson regression model based on Pólya-Gamma latent variables. This leads to a closed-form posterior conditional distribution under a logistic (logit) link function in modelling the excessive zeros and helps overcome the computational disadvantage of the logit link compared to a probit link. The feasibility of Gibbs sampling of the regression coefficients is particularly important in variable selection as it removes the tuning burden in the standard Metropolis-Hastings algorithm and improves the convergence. Simulation studies implement the proposed algorithm and illustrate how the choice of link function, between the probit and the logit links, influences the variable selection and prediction results. Model comparison is also carried out in an application to a German healthcare dataset.

  • Research Article
  • 10.1080/00949655.2025.2583476
Bayesian estimation analysis of partially linear varying coefficient skew-normal spatial autoregression models
  • Nov 4, 2025
  • Journal of Statistical Computation and Simulation
  • Mengxin Tian + 3 more

This paper presents a novel partially linear varying coefficient skew-normal spatial autoregression model for accommodating actual data that exhibit skewed tail behaviour, which may not be well modelled by normally distributed errors. We first utilize Bayesian P-splines to effectively approximate the nonparametric components of the model. Along with the Metropolis importance sampling algorithm for the nonparametric components of the model, we propose an effective Markov Chain Monte Carlo algorithm that integrates Gibbs sampling and the Metropolis-Hastings algorithm to generate posterior samples from the joint posterior distribution, thus facilitating statistical inference. We carry out extensive simulation studies to investigate the finite sample performance of the proposed method. Numerical results show that the proposed method is capable of effectively addressing the characteristics of skewed data and yielding conclusions that align well with the simulations under consideration. Finally, a real-data application is provided for illustrative purposes.

  • Research Article
  • Cited by: 1
  • 10.1088/1475-7516/2025/11/041
On the computational feasibility of Bayesian end-to-end analysis of LiteBIRD simulations within Cosmoglobe
  • Nov 1, 2025
  • Journal of Cosmology and Astroparticle Physics
  • R Aurvik + 99 more

We assess the computational feasibility of end-to-end Bayesian analysis of the JAXA-led LiteBIRD experiment by analysing simulated time ordered data (TOD) for a subset of detectors through the Cosmoglobe and Commander3 framework. The data volume for the simulated TOD is 1.55 TB, or 470 GB after Huffman compression. From this we estimate a total data volume of 238 TB for the full three year mission, or 70 TB after Huffman compression. We further estimate the running time for one Gibbs sample, from TOD to cosmological parameters, to be approximately 3000 CPU hours. The current simulations are based on an ideal instrument model, only including correlated 1/f noise. Future work will consider realistic systematics with full end-to-end error propagation. We conclude that these requirements are well within capabilities of future high-performance computing systems.
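The mission-level estimates quoted above follow from scaling the detector-subset volumes; a quick check of that arithmetic (the roughly 150x scale factor is inferred from the figures given, not stated in the abstract, and decimal units with 1 TB = 1000 GB are assumed):

```python
# Figures quoted in the abstract.
subset_raw_tb = 1.55         # simulated TOD for the detector subset
subset_compressed_gb = 470   # same subset after Huffman compression
full_raw_tb = 238            # estimated full three-year mission
full_compressed_tb = 70      # full mission after Huffman compression

# Implied scale factors from the subset to the full mission.
scale_raw = full_raw_tb / subset_raw_tb                              # ~153.5x
scale_compressed = full_compressed_tb * 1000 / subset_compressed_gb  # ~148.9x

# Compression ratio is roughly consistent before and after scaling.
ratio_subset = subset_raw_tb * 1000 / subset_compressed_gb           # ~3.3x
ratio_full = full_raw_tb / full_compressed_tb                        # ~3.4x
```

The two scale factors agree to within a few percent, so the full-mission figures are consistent with a straight extrapolation of the subset volumes at an unchanged compression ratio.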

  • Research Article
  • 10.1016/j.jevs.2025.105733
Genetic parameter estimates of performance traits in Iranian Thoroughbred race horses using a Bayesian approach.
  • Nov 1, 2025
  • Journal of equine veterinary science
  • M Taned + 4 more


  • Research Article
  • 10.1093/restud/rdaf093
Demand Analysis under Latent Choice Constraints
  • Oct 29, 2025
  • Review of Economic Studies
  • Nikhil Agarwal + 1 more

Abstract Consumer choices are constrained in many markets due to either supply-side rationing or information frictions. Examples include matching markets for schools and colleges; entry-level labor markets; limited brand awareness and inattention in consumer markets; and selective admissions to healthcare services. We analyze a general random utility model for consumer preferences that allows for endogenous characteristics and a reduced-form choice-set formation rule that can be derived from models of the examples described above. We show non-parametric identification of this model, propose an estimator, and apply these methods to study admissions in the market for kidney dialysis in California. Our identification results require two sets of instruments, one that only affects consumer preferences and the other that only affects choice sets. We show that both instruments are necessary for identification. These results also suggest tests of choice-set constraints, which we apply to the dialysis market. We find that dialysis facilities are less likely to admit new patients when they have a higher-than-normal caseload and that patients are more likely to travel further when nearby facilities have high caseloads. Finally, we estimate consumers’ preferences and facilities’ rationing rules using a Gibbs sampler.

  • Research Article
  • 10.17654/0972361725069
A COMPARATIVE STUDY OF TOPIC MODELLING TECHNIQUES AND SVM CLASSIFICATION FOR THE EXTRACTION OF EMERGING THEMES ON IMMUNITY FROM CORD-19
  • Oct 25, 2025
  • Advances and Applications in Statistics
  • S K M Jeyasree + 1 more

The objective of this study is to explore thematic structures and classify abstracts related to innate and adaptive immunity extracted from the CORD-19 dataset. The study aims to evaluate the effectiveness of various topic modelling and classification techniques for uncovering key topics and patterns in the dataset. For topic modelling, Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA), and Non-negative Matrix Factorization (NMF) were employed. Additionally, a Support Vector Machine (SVM) classifier with LSA-reduced features was applied to evaluate classification performance across various topic numbers (k-values). To address class imbalance, the Synthetic Minority Oversampling Technique (SMOTE) was used. The SVM model, trained with an RBF kernel, achieved high classification performance, as evidenced by the confusion matrix, ROC curve, and classification report. The performance of the models was assessed using precision, recall and F1-score. Research findings included identifying top terms from the topic models and extracting discriminative terms from the SVM model. The results demonstrated that LDA with Gibbs sampling, variational EM, and SVM with LSA reduction outperformed other methods in terms of classification accuracy and topic coherence. The study highlights the potential of combining topic modelling and machine learning techniques for analyzing scientific literature. The findings contribute to understanding emerging themes in innate and adaptive immunity research. This work offers valuable insights for researchers and healthcare professionals by enabling efficient exploration of large-scale biomedical datasets and supporting further research on immune responses.

  • Research Article
BASIN: Bayesian mAtrix variate normal model with Spatial and sparsIty priors in Non-negative deconvolution
  • Oct 24, 2025
  • ArXiv
  • Jiasen Zhang + 3 more

Spatial transcriptomics allows researchers to visualize and analyze gene expression within the precise location of tissues or cells. It provides spatially resolved gene expression data but often lacks cellular resolution, necessitating cell type deconvolution to infer cellular composition at each spatial location. In this paper we propose BASIN for cell type deconvolution, which models deconvolution as a nonnegative matrix factorization (NMF) problem incorporating a graph Laplacian prior. Rather than finding a deterministic optimum as other recent methods do, we propose a matrix variate Bayesian NMF method with nonnegativity and sparsity priors, in which the variables are maintained in their matrix form to derive a more efficient matrix normal posterior. BASIN employs a Gibbs sampler to approximate the posterior distribution of cell type proportions and other parameters, offering a distribution of possible solutions, enhancing robustness and providing inherent uncertainty quantification. The performance of BASIN is evaluated on different spatial transcriptomics datasets and outperforms other deconvolution methods in terms of accuracy and efficiency. The results also show the effect of the incorporated priors and reflect a truncated matrix normal distribution, as we expect.
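For orientation, the underlying NMF decomposition that BASIN builds on can be shown in its classical deterministic form — Lee-Seung multiplicative updates on squared error. This is the baseline BASIN replaces with a Bayesian Gibbs sampler, not BASIN itself; matrix names and the toy data are generic:

```python
import random

def nmf(V, rank, n_iter=500, seed=0):
    """Factor a nonnegative matrix V (list of lists) as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates, which monotonically reduce
    the squared reconstruction error while preserving nonnegativity."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]
    eps = 1e-9  # guards against division by zero

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(col) for col in zip(*A)]

    for _ in range(n_iter):
        WH = matmul(W, H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        # H <- H * (W^T V) / (W^T W H), elementwise
        H = [[H[k][j] * WtV[k][j] / (WtWH[k][j] + eps)
              for j in range(m)] for k in range(rank)]
        WH = matmul(W, H)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        # W <- W * (V H^T) / (W H H^T), elementwise
        W = [[W[i][k] * VHt[i][k] / (WHHt[i][k] + eps)
              for k in range(rank)] for i in range(n)]
    return W, H

V = [[1.0, 0.0, 2.0], [2.0, 0.0, 4.0], [0.0, 3.0, 0.0]]  # exactly rank-2 toy matrix
W, H = nmf(V, rank=2)
```

In the deconvolution setting, V plays the role of the spot-by-gene expression matrix and the factors carry cell-type proportions and signatures; BASIN's contribution is to place priors on W and H and sample them rather than iterate to a single point estimate.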


Copyright 2026 Cactus Communications. All rights reserved.