Articles published on Recent Advances
167,023 search results
Sort by Recency
- Research Article
- 10.1038/s44298-026-00173-w
- Feb 16, 2026
- npj Viruses
- Ayse Agac + 5 more
Respiratory syncytial virus (RSV) is a leading cause of respiratory tract infections, resulting in significant morbidity, hospitalizations, and mortality among high-risk populations. Despite the recent advent of vaccines and monoclonal antibodies, options to treat RSV infections are limited. A detailed molecular-level understanding of the virus-host interactions associated with disease severity may aid the development of novel intervention strategies. In this study, we examined the role of the transcription factor STAT1 in regulating cholesterol metabolism during RSV infection of epithelial-like cells. We demonstrated that CRISPR/Cas9-mediated STAT1 knock-out affected activation of the SREBP-SCAP cholesterol biosynthesis pathway, leading to intracellular cholesterol accumulation and increased RSV-induced syncytia formation. Pharmacological reduction of cholesterol levels blunted RSV-induced syncytia formation and affected the stability of the RSV fusion protein. These findings reveal a STAT1-dependent immune-metabolic pathway that constrains RSV dissemination through syncytia formation and could be a novel target for intervention strategies.
- Research Article
- 10.21873/cdp.10534
- Feb 1, 2026
- Cancer diagnosis & prognosis
- Ryotaro Watanabe + 8 more
The recent advent of immunotherapy has improved long-term survival in patients with unresectable esophageal cancer. However, second primary malignancies (SPMs) are expected to develop in these patients. We investigated the incidence of SPMs in patients with unresectable advanced esophageal cancer. We retrospectively reviewed the records of patients with unresectable esophageal cancer, including those with locally advanced and metastatic disease, who were treated at Kindai University Hospital between 2016 and 2022. The incidence of SPMs was determined among long-term survivors. The cumulative incidence of SPMs was estimated using the Gray subdistribution method, treating death as a competing risk. Among the 211 patients with unresectable esophageal cancer, 45 (21%) met the criteria for long-term survival. Five (11%) were diagnosed with SPMs after a median follow-up of 3.7 years. The cumulative incidences of SPMs at 3, 5, and 8 years were 7%, 10%, and 14%, respectively. The types of SPMs included diffuse large B-cell lymphoma and urothelial, lung, prostate, and thyroid cancers. All SPMs were cured with definitive treatment, and no deaths were attributed to them. Even among patients with unresectable esophageal cancer, long-term survivors had a measurable rate of SPMs. This highlights the importance of post-treatment surveillance for SPMs.
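The analysis above estimates cumulative incidence while treating death as a competing risk, which ordinary Kaplan-Meier estimation would overstate. A minimal sketch of how a competing-risks cumulative incidence function can be computed (this uses the nonparametric Aalen-Johansen estimator as a stand-in for the Gray subdistribution approach named in the abstract; the toy data are invented for illustration, not study data):

```python
def cumulative_incidence(times, events, cause=1):
    """Nonparametric (Aalen-Johansen) cumulative incidence function.
    events: 0 = censored, 1 = event of interest (SPM), 2 = competing
    event (death). Returns a list of (time, CIF) points."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0   # overall event-free survival just before time t
    cif = 0.0    # cumulative incidence of `cause`
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = d_any = removed = 0
        # pool all subjects tied at time t
        while i < len(data) and data[i][0] == t:
            ev = data[i][1]
            if ev == cause:
                d_cause += 1
            if ev != 0:
                d_any += 1
            removed += 1
            i += 1
        if at_risk > 0 and d_any > 0:
            cif += surv * d_cause / at_risk   # incidence mass added at t
            surv *= 1.0 - d_any / at_risk     # update overall survival
        at_risk -= removed
        curve.append((t, cif))
    return curve

# Toy cohort of 4 patients; patient 2 dies (competing risk) at t=2
curve = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
print(curve[-1])  # (4, 0.5)
```

Note that the competing death at t=2 shrinks the at-risk survival, so the event at t=3 contributes less incidence mass than a naive 1 − KM estimate would assign.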
- Research Article
- 10.25303/213rjbt044050
- Jan 31, 2026
- Research Journal of Biotechnology
- Ramesh Malothu + 1 more
Proteins are essential biomolecules of life, all built from the same 20 amino acids. Understanding a protein's structure plays a pivotal role in revealing its function within an organism's genome, and studying a protein's native conformation can pave the way for designing novel drugs against serious health ailments. Our current work analyzes the structure of Maturase K, a protein of Annona muricata, a medicinally potent plant of the family Annonaceae. The plant and its extracts have attracted scientific and medical attention because their phytochemical compounds help to cure or control several infectious diseases and certain cancers, such as colon cancer. The recent advent of artificial intelligence tools and their rapid adoption in bioinformatics has revolutionized structural biology through protein structure prediction tools such as AlphaFold, which delivers highly accurate structure predictions for vast numbers of proteins that would otherwise require time-consuming experimental determination. The present study uses AlphaFold to predict and elucidate the 3D structure of our query protein, Maturase K, and the ConSurf server to analyze its functional domains; both analyses are based solely on the amino acid sequence and on residue conservation within secondary structural elements. The amino acids conserved, or not, over the course of evolution may help establish evolutionary relationships and divergence among family members, aid in predicting and understanding the protein's stability, and leave broad scope for identifying binding sites to design drugs against diseases, including cancer, and for predicting new protein functionalities.
- Research Article
- 10.1145/3777552
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Cynthia Matuszek + 7 more
The comparatively recent advent of Large Language Models (LLMs) has resulted in a wide array of new capabilities and components relevant to Human–Robot Interaction (HRI) researchers. LLMs are being applied to vision, manipulation, planning, reasoning, learning, and HRI problems, frequently as “Scarecrows,” in which LLMs serve as black box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions. However, despite this explosion of applications, general questions remain about the best ways to incorporate LLMs into robot architectures, appropriate safety and guardrail considerations, and, critically, how to report properly on HRI research that involves LLMs. In this article, we explore the question of reporting guidelines for HRI researchers who utilize Scarecrows in robot architectures. We identify five key stakeholder groups in the HRI research process, discuss what information each group needs from HRI researchers, and identify appropriate mechanisms for conveying that information from HRI researchers to stakeholders either directly or indirectly. We contribute a set of suggested guidelines regarding what information should be included when researchers disseminate information about HRI research that uses LLMs.
- Research Article
- 10.1021/acs.jcim.5c02299
- Jan 12, 2026
- Journal of chemical information and modeling
- Yi Yang + 17 more
Extracting metal-organic framework (MOF) synthesis routes from the literature is crucial for the rational design of MOFs with desirable functionality. The recent advent of large language models (LLMs) provides a disruptive new solution to this long-standing problem. While the latest research on chemical data extraction mostly adopts either zero-shot LLMs that lack specialized materials knowledge or fine-tuned LLMs that incur high cost and inflexibility in our scenario, we introduce in this work the MaterialBrain pipeline, which optimizes the few-shot in-context learning technique of LLMs to accurately extract synthesis routes and design high-performance materials. First, a batch-epoch-iteration-based human-AI data curation approach is proposed to optimize both the quantity and quality of the annotation database for the synthesis extraction task, which are pivotal to MaterialBrain's performance. Second, an information retrieval algorithm is applied to select few-shot demonstrations from the annotation database for each extraction. On three data sets randomly sampled from nearly 90,000 well-defined MOFs, we conduct triple evaluations to validate our pipeline. The synthesis extraction, structure inference, and material design performance of MaterialBrain significantly outperforms that of zero-shot LLMs and baseline methods. The specific surface area of the lab-synthesized material guided by LLMs surpasses that of 99.2% of MOFs of the same class reported in the literature.
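The second step described in this abstract, retrieving few-shot demonstrations from an annotation database for each extraction, is a generic in-context-learning pattern. A minimal sketch under assumed data structures (bag-of-words cosine similarity and the prompt format here are illustrative stand-ins, not the MaterialBrain implementation, and the example records are invented):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_demos(query, annotated, k=2):
    """Pick the k annotated (paragraph, extraction) pairs most
    similar to the query paragraph, for use as few-shot demos."""
    qv = Counter(query.lower().split())
    return sorted(
        annotated,
        key=lambda ex: cosine(qv, Counter(ex[0].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, demos):
    """Assemble a few-shot extraction prompt from retrieved demos."""
    parts = ["Extract the synthesis route as JSON."]
    for text, extraction in demos:
        parts += ["Paragraph: " + text, "Extraction: " + extraction]
    parts += ["Paragraph: " + query, "Extraction:"]
    return "\n".join(parts)

# Invented annotation database entries for illustration
db = [
    ("ZIF-8 was synthesized from zinc nitrate and 2-methylimidazole in methanol",
     '{"metal": "Zn", "linker": "2-methylimidazole", "solvent": "methanol"}'),
    ("HKUST-1 crystals grew from copper nitrate and trimesic acid in ethanol/water",
     '{"metal": "Cu", "linker": "trimesic acid", "solvent": "ethanol/water"}'),
]
demos = retrieve_demos("MOF-5 from zinc nitrate and terephthalic acid in DMF", db, k=1)
print(build_prompt("MOF-5 from zinc nitrate and terephthalic acid in DMF", demos))
```

In a real pipeline the lexical similarity would be replaced by learned embeddings, but the retrieve-then-assemble shape of the prompt is the same.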
- Research Article
- 10.1016/j.injury.2025.112847
- Jan 1, 2026
- Injury
- Zachary M Bauman + 12 more
Traumatic clamshell thoracotomy closure using plates and screws - A new look for a challenging exposure: A pilot study.
- Research Article
- 10.1109/jbhi.2025.3588555
- Jan 1, 2026
- IEEE journal of biomedical and health informatics
- Zhiwen Yang + 5 more
Transformers have revolutionized medical image restoration, but the quadratic complexity still poses limitations for their application to high-resolution medical images. The recent advent of the Receptance Weighted Key Value (RWKV) model in the natural language processing field has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications for modeling spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as a basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI super-resolution, and all-in-one medical image restoration.
- Research Article
- 10.1051/epjconf/202635202001
- Jan 1, 2026
- EPJ Web of Conferences
- Stefan Persijn
It has been almost 60 years since the first commercial high-resolution FTIR spectrometer was launched. In gas metrology, such FTIR spectrometers have traditionally been used for a few niche applications, but they have never become a real workhorse. The interest in FTIR has recently revived thanks to new measurement challenges involving multiple reactive gases in applications like CCUS, biogas, and hydrogen quality analysis. However, standard commercial FTIR equipment, such as the gas cell, is typically not fit for purpose for these applications. This paper will discuss some of the necessary modifications to gas cells to exploit the full potential of FTIR as a versatile tool for the selective measurement of reactive gases. Further, common pitfalls in spectral data analysis are discussed. Experimental results on reactive gases in NO2 and CCUS gas standards are presented to show what can be learnt from FTIR measurements. The paper will conclude with an outlook on whether there is a future for FTIR spectrometers with the recent advent of broadband laser spectrometers with similar multicomponent measurement capabilities.
- Research Article
- 10.63740/j6wtxy53
- Dec 31, 2025
- Journal of Islamic Banking Economics and Policy
- Maryam Saeed + 1 more
Background: The idea of insurance was conceived several millennia before Christ (BC). In the second and third millennia BC, traders from China and Babylonia practiced shifting or dispersing risks. Today, insurance is a foundation of the economy, but expanding its penetration is difficult in emerging nations. The fourth insurance industry revolution in the developed world was sparked by the recent advent of IoT, Big Data, and InsurTech. Objective: This study examines the problems with, and potential solutions for, implementing IoT to boost insurance coverage in Bangladesh. Research Methodology: To identify the themes and factors pertaining to problems and solutions in implementing IoT in Bangladesh's insurance business, this study used a systematic literature review. Several keywords were employed to find pertinent material from Google Scholar, and the filtered studies were examined against inclusion and exclusion criteria. Findings: This report outlines many obstacles to IoT adoption in Bangladesh's insurance sector, as well as potential remedies. The proposals could help policymakers improve service delivery in the insurance industry.
- Research Article
- 10.1007/s10142-025-01791-y
- Dec 26, 2025
- Functional & integrative genomics
- Shambhu Krishan Lal + 11 more
Cereals are crucial sources of food for human and animal populations worldwide. Their grain and fodder primarily serve as sources of energy and nutrition. Cereal production is hampered by prevalent abiotic stresses worldwide: drought, salinity, extreme temperatures, and heavy metal toxicity significantly reduce global cereal crop production. Traditional breeding and transgenic technology have previously been promising and potent approaches for mitigating unfavourable abiotic stresses, enhancing crop production to some extent. The recent advent of more potent genome-editing technologies, particularly Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), has revolutionized the pace of crop improvement programs. Genome editing using engineered nucleases offers significant opportunities for crop improvement; the available tools include meganucleases, Zinc Finger Nucleases (ZFNs), Transcription Activator-Like Effector Nucleases (TALENs), and CRISPR/CRISPR-associated protein (Cas) systems. Among these, CRISPR/Cas9 has been most widely used to improve crop cultivars due to its specificity, simplicity, robustness, and flexibility. Recent progress in genome-editing technology has improved various plant traits in cereals; in particular, cereal genotypes have shown substantial advances in tolerance to abiotic stress over the last decade, enabled by genome-editing tools. This review summarizes recently developed abiotic stress-tolerant cereal cultivars that employ different genome-editing technologies, including the most recent additions, prime editing and base editing. These improved cereal cultivars perform better and maintain higher yields under adverse abiotic stresses.
- Research Article
- 10.1002/sd.70583
- Dec 23, 2025
- Sustainable Development
- Timothy Akinwande + 1 more
ABSTRACT The recent impacts of technology and digitalisation on all industrial sectors, including real estate, are significant and cannot be overstated. However, there is a paucity of studies on how technology can improve the provision of affordable housing (AH) in developing economies. While the prevailing challenge of AH provision across the world is an urgent call for innovation, the recent advent of technology makes it essential to investigate what significant benefits can be derived from employing it to improve AH provision. A practical step is to carefully explore expert opinions on the benefits of technology in improving AH finance provision in developing economies. To this end, this qualitative study conducted a focus group discussion and semi-structured interviews with 12 housing experts in Nigeria; the recordings were transcribed, coded and analysed with NVivo. Following descriptive, content and thematic analyses of the data, the findings indicate that technology is most useful for record-keeping, tracking records and proper documentation. Other significant benefits include innovations that can improve the savings culture towards housing finance and enhanced access to loans for housing finance. These insights, according to the experts, offer a glimpse into the significant benefits that can be derived from applying technology to enhance AH financing, and they are informative for all stakeholders interested in AH finance provision, a crucial step towards achieving the Sustainable Development Goals. The study's clustered data are vital for future pro-poor housing research.
- Research Article
- 10.1002/admt.202502395
- Dec 21, 2025
- Advanced Materials Technologies
- Shi Tang + 5 more
ABSTRACT The light-emitting electrochemical cell (LEC) is a good fit for scalable ambient-air coating and printing, since it can deliver efficient emission from a robust three-layer architecture comprising solely air-stable and soluble materials. However, a drawback is that the emitters hitherto employed for printed LECs are either conjugated polymers, which are efficiency- and purification-limited, or small molecules, which are difficult to solution-process into uniform thin films. The recent advent of dendrimers that emit by thermally activated delayed fluorescence (TADF) promises to address all these issues, since they can efficiently utilize all (both singlet and triplet) excitons for light emission, be of high purity because of their well-defined structure, and feature high solubility and good film-forming capacity by virtue of being equipped with branched dendrons. Herein, an asymmetric second-generation TADF dendrimer, tBuCz2m2pTRZ, is combined with an ionic-liquid electrolyte to formulate a tuned ink, which is used for bar-coating fabrication of uniform LEC active-material films featuring a high photoluminescence quantum yield of 84%. This opportunity is ultimately utilized for the pioneering demonstration of a bar-coated TADF-dendrimer LEC, which delivers uniform and bright green luminance of 350 cd m−2 at an external quantum efficiency of 1.2%.
- Research Article
- 10.3758/s13428-025-02898-7
- Dec 8, 2025
- Behavior Research Methods
- Christopher T Kello + 2 more
Neural network modeling has played a central role in psycholinguistic studies of lexical processing, but the recent advent of large language models (LLMs) offers a different approach that may yield new insights into the mental lexicon. Four LLMs were prompted across three experiments to test how they generate psycholinguistic ratings of words in comparison with humans. LLM ratings, averaged across varying list contexts, were found to be highly correlated with human ratings, and differences in correlation strengths were partly explained by differences in rating ambiguity. LLM context manipulations strengthened correlations with human ratings through better calibration, and variability in LLM ratings was correlated with human inter-rater variability. Additional results from testing LLM generation of word naming latencies showed functional deviations from factors that underlie human word naming, indicating that lexical function assembly in LLMs is currently limited by patterns of co-occurrence in textual data. Patterns at finer-grained timescales are needed in the training data to model online lexical processes. We conclude that LLMs used context to guide the assembly of generalized lexical functions, rather than recalling ratings and latencies from training data. Supplementary information is available in the online version at 10.3758/s13428-025-02898-7.
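The headline analysis in this abstract, correlating LLM-generated ratings (averaged across list contexts) with human norms, reduces to a Pearson correlation over matched word lists. A self-contained sketch with invented toy ratings (these numbers are illustrative, not data from the study):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented concreteness ratings for five words:
# human norms vs. LLM ratings averaged over list contexts
human = [4.8, 2.1, 3.5, 1.2, 4.0]
llm = [4.6, 2.4, 3.9, 1.5, 4.2]
print(pearson_r(human, llm))
```

A value near 1.0 would correspond to the "highly correlated" finding; the interesting part of the study is in where and why such correlations weaken (rating ambiguity, context calibration), which this sketch does not model.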
- Research Article
- 10.1016/j.nanoms.2022.09.004
- Dec 1, 2025
- Nano Materials Science
- Ying Guo + 3 more
Recent advances in Zn–CO2 batteries for the co-production of electricity and carbonaceous fuels
- Research Article
- 10.1093/nar/gkaf1325
- Nov 26, 2025
- Nucleic Acids Research
- Nadejda B Boev + 2 more
The recent advent of long-read whole genome sequencing has enabled us to create an accurate telomere-to-telomere reference genome, construct pangenome graphs, and compile precise catalogs of genomic structural variations (SVs). These comprehensive SV repositories provide an excellent opportunity to explore the role of SVs in genotype-phenotype associations and examine the mechanisms by which SVs are introduced through double-strand break (DSB) repair. Here, we employed comprehensive SV catalogs identified through various short- and long-read whole genome sequencing efforts to infer the underlying mechanisms of SV introduction based on their genomic and epigenomic profiles. Our findings indicate that high local DNA methylation and DNA shape-related features, such as low variations in propeller twist, support the origins of homology-driven SVs. Subsequently, we utilized an active-learning-based unsupervised clustering approach, revealing that homology-dependent SVs show greater evidence of retaining ancestral recombination patterns compared to their homology-independent counterparts. Finally, our comparison of inherited and de novo SVs from healthy populations and rare disease cohorts showed distinct upstream H3K27me3 levels in de novo SVs from individuals with ultra-rare disorders. These findings highlight genome-wide characteristics that may influence the choice of repair mechanisms linked to heritable SV origins.
- Research Article
- 10.1609/aaaiss.v7i1.36886
- Nov 23, 2025
- Proceedings of the AAAI Symposium Series
- Mohamed Ibn Khedher + 2 more
Video Anomaly Detection (VAD) is a critical task for identifying unusual events in video streams, with applications ranging from public safety surveillance to industrial monitoring. Traditional VAD methods, often based on reconstruction or prediction errors, excel at detecting deviations but typically lack semantic understanding, failing to explain why an event is anomalous. The recent advent of Vision-Language Models (VLMs) and Large Language Models (LLMs) has introduced a new paradigm, enabling systems to interpret and reason about video content in natural language. However, existing VLM/LLM-based approaches often focus either on rich, open-ended description or on structured, rule-based reasoning, but rarely both. In this paper, we address this gap by proposing a novel hybrid framework that synergizes the strengths of descriptive and deductive models. Our approach first leverages a powerful VLM to generate detailed, contextual scene descriptions. These descriptions are then fed into a rule-driven LLM, which uses a pre-induced set of contextual rules to make a final anomaly judgment and provide a human-readable explanation grounded in the specific rule that was violated. We validate our approach on the large-scale UCF-Crime dataset and conduct an analysis of key hyperparameters, including the VLM's input prompt and the number of frames used for description. Our results demonstrate the effectiveness of the proposed architecture and offer insights into building more interpretable, reliable, and context-aware VAD systems.
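The describe-then-judge pipeline in this abstract can be caricatured in a few lines. In the sketch below, simple keyword predicates stand in for both stages: the scene description would come from a VLM and the rule reasoning from an LLM; the rules and the example description are purely illustrative:

```python
# Pre-induced contextual rules: name -> predicate over the description.
# In the real framework an LLM reasons over natural-language rules;
# keyword checks are a toy stand-in.
RULES = {
    "no_unattended_bag": lambda d: "unattended bag" in d,
    "no_running_in_lobby": lambda d: "running" in d and "lobby" in d,
    "no_vehicle_on_walkway": lambda d: "vehicle" in d and "walkway" in d,
}

def judge(description):
    """Return an anomaly verdict plus the violated rule (the
    human-readable grounding for the judgment), if any."""
    d = description.lower()
    for name, violated in RULES.items():
        if violated(d):
            return {"anomaly": True, "rule": name}
    return {"anomaly": False, "rule": None}

verdict = judge("A person is running through the hotel lobby at night.")
print(verdict)  # {'anomaly': True, 'rule': 'no_running_in_lobby'}
```

The point of the hybrid design is visible even here: the verdict carries the specific rule that fired, so the explanation is grounded rather than free-form.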
- Research Article
- 10.1016/j.jconrel.2025.114196
- Nov 1, 2025
- Journal of controlled release : official journal of the Controlled Release Society
- Chaichai Nie + 10 more
Nanodrug-based therapeutic interventions for tumor-associated microbiota modulation.
- Research Article
- 10.1007/s10554-025-03513-8
- Nov 1, 2025
- The international journal of cardiovascular imaging
- Takahide Ito + 2 more
Collagen diseases are chronic inflammatory disorders that systemically affect connective tissues of the entire body, including the skin, joints, and kidneys. Cardiac involvement is not uncommon in collagen diseases. Pulmonary hypertension is the most prevalent complication, followed by systolic and diastolic dysfunction, conduction abnormalities, perimyocarditis, and valvular disease. The recent advent of echocardiographic techniques has enabled the detection of subclinical cardiac changes in collagen diseases. In systemic sclerosis, exercise echocardiography can unmask pulmonary artery hypertension. Speckle tracking echocardiography applied to the right ventricle in systemic sclerosis is useful for assessing the degree of chronic right ventricular pressure overload in addition to the irreversible pathological processes affecting the pulmonary vasculature. In systemic lupus erythematosus, three-dimensional echocardiography provides a clearer visualization of Libman-Sacks endocarditis lesions compared to two-dimensional echocardiography. Dermatomyositis, as well as rheumatoid arthritis, has traditionally been associated with pericardial effusion due to perimyocarditis and, more recently, with diastolic dysfunction and myocardial hypertrophy. Here, we present echocardiographic findings for representative types of collagen diseases, highlighting their characteristic features and providing accompanying images for each.
- Research Article
- 10.21203/rs.3.rs-7782723/v1
- Oct 13, 2025
- Research Square
- Antonina L Nazarova + 7 more
The recent advent of synthesizable on-demand chemical spaces of drug-like compounds has opened new horizons in the discovery of ligands and drug candidates for clinically relevant targets, but has exposed the scalability of computational screening as a key bottleneck. The modular V-SYNTHES approach has shown highly efficient, >1000-fold accelerated virtual screening, but its initial implementation was not fully automated, was limited to the initial version of the Enamine REAL space (11 billion compounds), and was validated on only two targets. Here we present an upgraded V-SYNTHES2 workflow with improved automation and scalability, support for the expanded REAL Space of 36 billion readily available compounds, and an assessment of its performance on new, more challenging targets. Like the original method, V-SYNTHES2 employs initial docking of the Minimal Enumeration Library (MEL) of fragments that represent all scaffolds and synthons of the REAL space. The best fragments are iteratively enumerated with corresponding synthons and the intermediates redocked, until the fully enumerated molecules are docked and selected for synthesis. V-SYNTHES2 introduces a new geometry-based CapSelect method, allowing us to fully automate MEL fragment selection based on docking score and optimal binding pose. The method shows excellent enrichment and binding pose reproducibility in computational benchmarks, including challenging targets with shallow pockets, RNA-binding sites, G-protein-coupled receptors (GPCRs), and phospholipid-binding enzymes. Experimental testing shows the utility of this workflow in prospective screening campaigns for two new targets. The fully automated V-SYNTHES2 workflow (https://github.com/KatritchLab/V-SYNTHES2_pipeline/) can be deployed on computing clusters or clouds, offering a powerful tool for effective screening of giga-scale chemical spaces.
- Research Article
- 10.31875/2410-4701.2021.08.5
- Oct 2, 2025
- Journal of Material Science and Technology Research
- Di Wang + 4 more
Quarrying and processing of granite produce large amounts of waste residues. Besides being a loss of resources, improper disposal of these wastes results in pollution of the soil, water and air around the dumpsites. The main components of granite waste are quartz, feldspars and a small amount of biotite. Due to its hard and dense texture, high strength, corrosion resistance and wear resistance, granite waste may be recycled into building materials, composite materials and fine ceramics, effectively improving their mechanical properties and durability. By using the flotation process, high value-added products such as potash feldspar and albite may be retrieved from granite waste. Also, granite waste has potential applications in soil remediation and sewage treatment. This review presents recent advances in granite waste reutilization, points out the problems associated with its use and the related countermeasures, and indicates the scale of high value-added reutilization of granite waste.