Related Topics

  • Single View
  • Single Viewpoint

Articles published on Multiple Views

4380 search results, sorted by recency
  • Research Article
  • 10.1016/j.jtherbio.2026.104426
Thermal signatures in breast cancer: Deciphering latent biomarkers through deep learning and explainable AI.
  • Apr 1, 2026
  • Journal of thermal biology
  • Sachin Kansal + 4 more


  • Research Article
  • 10.1007/s44443-026-00601-0
STSyn-BEV: BEV segmentation from surround-view fisheye cameras via spatio-temporal synchronization
  • Mar 9, 2026
  • Journal of King Saud University Computer and Information Sciences
  • Ping Liu + 2 more

Bird’s-Eye-View (BEV) semantic segmentation is critical for environmental perception in autonomous driving. Surround-view fisheye camera systems are increasingly adopted to enlarge the perception range and eliminate blind spots. However, severe geometric distortions and frequent ego-motion make accurate spatio-temporal feature alignment across multiple views and timestamps challenging. Such misalignment often leads to semantic inconsistency and notable drops in BEV segmentation accuracy. Moreover, most existing methods overlook these alignment errors and apply semantic supervision only at the final output, resulting in suboptimal intermediate BEV representations. To address these challenges, we propose STSyn-BEV, a Spatio-Temporal Synchronized BEV segmentation framework for surround-view fisheye cameras. It comprises three key components: a Pose-Sync (pose-synchronized) encoder, a semantic consistency supervision module, and a stage-wise supervision decoder with heterogeneous pathways. First, the Pose-Sync encoder explicitly transforms multi-view fisheye features from previous poses and timestamps into a unified BEV space via geometric transformation, substantially improving geometric consistency and temporal alignment. Second, the semantic consistency supervision module applies region-level contrastive learning to aggregated BEV features, enhancing semantic discrimination particularly for long-tailed categories. Third, the deep supervised decoder employs heterogeneous pathways—attention-based for global semantic reasoning and convolution-based for fine-grained structural refinement—guided by stage-wise supervision, enabling improved BEV feature decoding without additional inference cost. Extensive experiments on the FB-SSEM dataset demonstrate that STSyn-BEV surpasses state-of-the-art fisheye image-based BEV segmentation methods, notably achieving a 6.25% mIoU improvement over the strongest fisheye-specific baseline.
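The geometric step the abstract attributes to the Pose-Sync encoder — re-expressing features captured at a previous ego pose in the current BEV frame — is at heart a rigid 2D transform. A minimal sketch (function name and point-list interface are illustrative; the actual encoder warps feature grids, not explicit points):

```python
import numpy as np

def warp_bev_coords(coords, dx, dy, dtheta):
    """Map BEV ground-plane points from a previous ego frame into the
    current one, given relative ego-motion (dx, dy, dtheta).
    Toy stand-in for a pose-synchronized geometric transformation."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    rot = np.array([[c, -s], [s, c]])            # 2D rotation matrix
    return coords @ rot.T + np.array([dx, dy])   # rotate, then translate
```

In a full model the same transform would typically be used to index into a feature map (e.g., via bilinear sampling) rather than to move points.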

  • Research Article
  • 10.1145/3797258
Incorporating Multimodal Commonsense and Heterogeneous User Knowledge for Personalized Implicit Sentiment Analysis in Chinese
  • Mar 9, 2026
  • ACM Transactions on Asian and Low-Resource Language Information Processing
  • Jian Liao + 4 more

Implicit sentiment analysis (ISA) is particularly sensitive to user characteristics due to the absence of explicit sentiment cues. While existing approaches leverage explicit user attributes and social relationships, they neglect the implicit interest preferences embedded in user content and multimodal commonsense knowledge. This article introduces a novel personalized ISA framework that systematically integrates heterogeneous user knowledge with multimodal commonsense to address this limitation. Our core innovation lies in a multi-stage knowledge integration pipeline that first captures rich semantic representations through a large language model, then constructs a comprehensive user profile by fusing multiple views of implicit interests derived from user-multimodal commonsense-content interactions. Specifically, we employ graph neural networks to distill structured knowledge from automatically constructed multimodal commonsense graphs, which enhances semantic understanding. The different perspectives of user interests are then systematically fused to capture implicit preference characteristics. Finally, we introduce an adaptive gated fusion mechanism that dynamically incorporates heterogeneous user knowledge and multimodal commonsense into implicit sentiment semantics, enabling personalized analysis capabilities. Extensive experiments on two public personalized ISA Chinese datasets demonstrate that our method outperforms baselines by at least 2.86% and 3.03%, respectively, validating its effectiveness in comprehensive and personalized modeling of implicit sentiment.
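The "adaptive gated fusion mechanism" described above is, in general form, a learned sigmoid gate that decides per dimension how much auxiliary knowledge enters the sentiment representation. A minimal sketch under that assumption (the parameterization and names here are illustrative, not the paper's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(semantic, user_vec, commonsense_vec, w_gate, b_gate):
    """Hypothetical adaptive gate: a sigmoid over all three inputs decides,
    per dimension, how much user/commonsense knowledge to mix into the
    implicit-sentiment representation."""
    gate = sigmoid(w_gate @ np.concatenate([semantic, user_vec, commonsense_vec]) + b_gate)
    aux = 0.5 * (user_vec + commonsense_vec)       # pooled auxiliary knowledge
    return gate * semantic + (1.0 - gate) * aux    # convex per-dim blend
```

With the gate saturated at 1 the model falls back to pure sentiment semantics; near 0 it leans entirely on the auxiliary knowledge.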

  • Research Article
  • 10.1016/j.neunet.2026.108821
Black-box physical adversarial stripes for hiding from infrared detectors at multiple views.
  • Mar 8, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Zhaolu Zheng + 3 more


  • Research Article
  • 10.1148/rg.250096
Triangulation and Additional Views in Mammography.
  • Mar 1, 2026
  • Radiographics : a review publication of the Radiological Society of North America, Inc
  • Rosa M Lorente-Ramos + 3 more

Breast triangulation uses multiple mammographic views to accurately localize findings and facilitate correlation with patient symptoms and US findings, with supplementary views providing additional support.

  • Research Article
  • 10.1021/acs.analchem.5c05675
Learning from All Views: A Multiview Contrastive Framework for Metabolite Annotation.
  • Feb 23, 2026
  • Analytical chemistry
  • Yan Zhou Chen + 1 more

Metabolomics, enabled by high-throughput mass spectrometry, promises to advance our understanding of cellular biochemistry and guide new discoveries in disease mechanisms, drug development, and personalized medicine. However, as the assignment of molecular structures to measured spectra is challenging, annotation rates remain low and hinder potential advancements. We present MultiView Projection (MVP), a novel framework for learning a joint embedding space between molecules and spectra by leveraging multiple data views: molecular graphs, molecular fingerprints, spectra, and consensus spectra. MVP builds on contrastive multiview learning to capture mutual information across views, leading to more robust and generalizable representations for spectral annotation. Unlike prior approaches that consider multiple views via concatenation or as targets of auxiliary tasks, MVP learns from all views jointly, resulting in improved molecular candidate ranking. Notably, MVP supports annotation using either individual spectra or consensus spectra, enabling flexible use of multiple measurements. On the MassSpecGym benchmark, we show that annotation using query consensus spectra significantly outperforms rank aggregation strategies based on constituent spectrum annotation. Using the consensus spectrum view, MVP achieves 36.0 and 14.0% rank@1 when retrieving candidates by mass and formula, respectively. When ranking using individual spectra, MVP demonstrates performance that is superior to or on par with existing methods, achieving 26.4 and 11.1% rank@1 for candidates by mass and formula, respectively. MVP offers a flexible, extensible foundation for learning from multiple molecule/spectra data views.
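Contrastive objectives of the kind MVP builds on are typically InfoNCE-style losses: each spectrum embedding should score highest against its own molecule's embedding and low against every other candidate in the batch. A minimal NumPy version over one pair of views (the actual MVP loss spans four views and learned encoders):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE over a batch: row i of `anchors` should match row i of
    `positives`; all other rows in the batch act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))         # -log p(correct match)
```

Learning from all views jointly would then amount to summing such a loss over every pair of views (molecular graph, fingerprint, spectrum, consensus spectrum) of the same molecule, which is our reading of the multiview setup rather than the paper's stated formula.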

  • Research Article
  • 10.1021/acs.jcim.5c03158
DVMMHGNN: A Dual View Multi-Modal Heterogeneous Graph Neural Network with Contrastive Learning for Microbe-Informed Drug Repurposing.
  • Feb 23, 2026
  • Journal of chemical information and modeling
  • Huan Li + 7 more

Drug repurposing (DR) offers an efficient and cost-effective strategy for pharmaceutical development by identifying new therapeutic applications for existing drugs. The effectiveness of this approach relies on accurately uncovering potential drug-disease associations; however, capturing the complex biological interactions underlying these associations remains a major challenge. Current computational approaches frequently overlook the critical regulatory role of the microbiota in modulating drug action pathways. Moreover, many methods fail to preserve semantic consistency during multimodal biological data integration and heterogeneous graph augmentation, thereby limiting their representational capacity. To overcome these limitations, we propose DVMMHGNN, a heterogeneous graph contrastive learning framework for microbe informed drug repurposing that jointly integrates structural and meta-path information. First, a multimodal feature fusion module embeds heterogeneous biological entities into a unified latent space to ensure cross-modal feature alignment. Second, a graph-masked autoencoder is employed to capture high-order representations from similarity networks. Finally, DVMMHGNN enhances semantic coherence through contrastive learning at both the structural and meta-path levels, aligning embeddings across multiple views to effectively capture both local and global semantics. Experimental evaluations on the constructed benchmark data set demonstrate that DVMMHGNN consistently outperforms nine state-of-the-art methods in predicting drug-disease associations, achieving superior performance across AUC, AUPR, and F1-score metrics. Ablation studies further validate the contribution of each model component, while case analyses highlight the potential of DVMMHGNN to identify novel drug indications and guide therapeutic strategy development.

  • Research Article
  • 10.3390/buildings16040843
Dual-Stage Graph-Based Association Framework for Cross-View Person Re-Identification in Construction Worker Monitoring
  • Feb 19, 2026
  • Buildings
  • Dohyeong Kim + 2 more

Tracking worker identities across cameras is increasingly important for advanced construction site monitoring, such as safety and productivity monitoring. However, current computer vision-based tracking faces challenges in reliably associating worker identities due to frequent occlusions and extreme viewpoint shifts between aerial and ground cameras, resulting in fragmented trajectories and ID switches. This study proposes a Dual-Stage Graph-based Association framework that integrates worker detections across multiple views using complementary Re-identification models and camera-aware adaptive thresholding. The framework synergistically combines TransReID for viewpoint-invariant global features and BPBReID for occlusion-robust part-based features, producing more discriminative representations. Data association leverages a graph-based clustering approach to combine representation features, camera topology, and temporal cues for robust identity maintenance. The first stage enables cross-view clustering while preventing false matches, and the second stage ensures long-term identity stability through EMA-based gallery management. Experiments on two construction sites demonstrate that the proposed framework achieves an HOTA of 39.85% and an IDF1 of 63.58%, outperforming existing baselines while reducing ID switches by 35.0%. Results on the AG-ReID.v2 benchmark demonstrate strong generalization with 90.82% Rank-1 accuracy in aerial-to-CCTV matching. The approach highlights initial feasibility for cross-view multi-camera tracking in construction with potential for extension to more complex industrial environments.
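The "EMA-based gallery management" used for long-term identity stability has a simple core: each worker's stored appearance template is nudged toward new observations rather than replaced by them. A sketch under that assumption (names and the plain-list feature format are illustrative):

```python
def ema_update(gallery, track_id, feature, momentum=0.9):
    """Per-identity appearance gallery kept stable with an exponential
    moving average: a high momentum keeps the template robust to
    occlusion-corrupted single frames."""
    if track_id not in gallery:
        gallery[track_id] = list(feature)              # first sighting
    else:
        gallery[track_id] = [momentum * old + (1.0 - momentum) * new
                             for old, new in zip(gallery[track_id], feature)]
    return gallery[track_id]
```

A noisy detection therefore shifts the stored template by only 10% per frame at the default momentum, which is what makes re-association after occlusion reliable.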

  • Research Article
  • 10.3390/info17020184
Research on Density-Adaptive Feature Enhancement and Lightweight Spectral Fine-Tuning Algorithm for 3D Point Cloud Analysis
  • Feb 11, 2026
  • Information
  • Wenquan Huang + 4 more

To address fragile feature representation in sparse regions and detail loss in occluded scenes caused by uneven sampling density in 3D point cloud semantic segmentation on the SemanticKITTI dataset, this article proposes an innovative framework that integrates density-adaptive feature enhancement with lightweight spectral fine-tuning, which applies frequency-domain transformations (e.g., the Fast Fourier Transform) to point cloud features to optimize computational efficiency and enhance robustness in sparse regions. The method begins by accurately calculating each point’s local neighborhood density using a KD tree radius search, subsequently injecting this as an additional feature channel to enable the network’s adaptation to density variations. A density-aware loss function is then employed, dynamically adjusting the classification loss weights—by approximately 40% in low-density areas—to strongly penalize misclassifications and enhance feature robustness for sparse points. Additionally, a multi-view projection fusion mechanism is introduced that projects point clouds onto multiple 2D views, capturing detailed information via mature 2D models. This information is then fused with the original 3D features through backprojection, thereby complementing geometric relationships and texture details to effectively alleviate occlusion artifacts. Experiments on the SemanticKITTI dataset for semantic segmentation show significant performance improvements over the baseline, achieving Precision 0.91, Recall 0.89, and F1-Score 0.90. In low-density regions, the F1-Score improved from 0.73 to 0.80. Ablation studies highlight the contributions of density feature injection, multi-view fusion, and density-aware loss, enhancing F1-Score by 3.8%, 2.5%, and 5.0%, respectively. This framework offers an effective approach for accurate and robust point cloud analysis through optimized density techniques and spectral-domain fine-tuning.
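The density injection and density-aware re-weighting described above can be sketched in a few lines. The brute-force neighbor count and the below-median sparse/dense split are our illustrative choices (the paper uses a KD-tree radius search and does not specify the split):

```python
import numpy as np

def local_density(points, radius):
    """Neighbor count within `radius` per point. Brute force here; a
    KD-tree radius search gives the same counts faster on large clouds."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return (d2 <= radius ** 2).sum(axis=1) - 1     # exclude the point itself

def density_aware_weights(density, boost=0.4):
    """Up-weight the classification loss for sparse points; boost=0.4
    mirrors the ~40% adjustment described in the abstract."""
    dens = np.asarray(density, dtype=float)
    weights = np.ones_like(dens)
    weights[dens < np.median(dens)] += boost       # penalize misses on sparse points
    return weights
```

The density vector doubles as the extra feature channel, and the weight vector multiplies the per-point classification loss.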

  • Research Article
  • 10.1109/tmm.2026.3660180
Screen Detection from Egocentric Image Streams Leveraging Multi-View Vision Language Model.
  • Feb 10, 2026
  • IEEE transactions on multimedia
  • Xueshen Li + 9 more

Accurately monitoring the screen exposure of young children is important for research related to screen use, such as childhood obesity, physical activity, and social interaction. Most existing studies rely upon self-report or manual measures from bulky wearable sensors, thus lacking efficiency and accuracy in capturing quantitative screen exposure data. In this work, we developed a novel screen detection framework that utilizes egocentric images from a wearable sensor, named the screen time tracker (STT), and a vision language model (VLM). In particular, we devised a multi-view VLM that takes multiple views from egocentric image streams and interprets screen exposure dynamically. We validated our approach by using a dataset of children's free-living activities, demonstrating significant improvement over existing methods in conventional vision language models and object detection models. The combination of a vision language model and a lightweight hardware design provides a novel solution in screen detection for children. The proposed framework has great potential to benefit children's behavioral study. The code is available at https://github.com/YGanLab/MV-VLM.

  • Research Article
  • 10.1177/14738716251414381
Wiggum: Interactive visual analytics for examining mix effects
  • Feb 10, 2026
  • Information Visualization
  • Chenguang Xu + 3 more


  • Research Article
  • 10.1093/geronb/gbag018
Daily Health Problems, Views of Aging, and the Moderating Role of Awareness of Age-Related Changes.
  • Feb 5, 2026
  • The journals of gerontology. Series B, Psychological sciences and social sciences
  • Maiken Tingvold + 4 more

Views of aging, such as subjective age or subjective accelerated aging, are related to health problems: In longitudinal and daily assessments, experiencing more health issues is associated with more negative views of aging. This study investigates whether the association between health problems and multiple views of aging constructs is moderated by people's experience of age-related gains and losses. We therefore expected people to demonstrate more negative views of aging on days with more health problems. Following previous research on awareness of age-related changes as an important moderator for the impact of age-related experiences to developmental outcomes, we assumed that 1) on days when participants experienced higher awareness of age-related gains the adverse effect of health problems on views of aging would be reduced, 2) the association between health problems and views of aging should be amplified on days when participants experienced higher awareness of age-related losses. A sample of N = 69 participants aged 52-75 years (M age = 62.72, SD = 5.57) reported their subjective age (uni- and multidimensional), subjective accelerated aging, health problems, and awareness of age-related gains and losses for up to 14 days of daily diary assessments. Age, gender, education, and baseline health were included as covariates. Multilevel models showed that perceiving more age-related losses was associated with an exacerbation of the positive association between daily health problems and multidimensional subjective age and subjective accelerated aging. Our findings underscore the importance of perceiving age-related losses in daily life. Perceiving changes as age-related may influence how daily experiences are interpreted and their impact on developmental outcomes.

  • Research Article
  • 10.1109/tmi.2026.3656355
Enhancing Knee Disease Diagnosis via Multi-View Graph Representation with Multi-Task Pre-Training.
  • Feb 3, 2026
  • IEEE transactions on medical imaging
  • Zixu Zhuang + 9 more

Magnetic resonance imaging (MRI) is an indispensable tool for clinical knee examination, which often scans 2D stacked slices from multiple views. Radiologists typically locate lesion regions in one view, and then refer to other views to formulate a comprehensive diagnosis. However, existing computer-aided diagnosis methods fall short of identifying and fusing local regions in multi-view scans, leading to a decline in diagnostic performance and a heavy reliance on extensively annotated data. This paper introduces a novel framework that represents multi-view MRI scans as a knee graph, and conducts diagnosis using the proposed Knee Graph Network (KGNet). Moreover, KGNet is greatly enhanced by multi-task pre-training, which requires KGNet to reconstruct masked knee local patches and segment unmasked ones working alongside corresponding decoders. Experimental evaluations on public and in-house clinical datasets confirm that our framework outperforms existing approaches in diagnosing cartilage defects, anterior cruciate ligament tears, and knee abnormalities. In conclusion, our framework demonstrates the potential of enhancing knee disease diagnosis by representing multi-view MRI scans as a graph and employing multi-task pre-training in the graph network. The code is publicly available at https://github.com/zixuzhuang/KGNet.

  • Research Article
  • 10.1016/j.neunet.2025.108177
Multi-view spectral clustering algorithm based on bipartite graph and multi-feature similarity fusion.
  • Feb 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Shunyong Li + 3 more


  • Research Article
  • 10.1175/jtech-d-24-0142.1
Novel Multiview Machine Learning Classification of Snowflakes: Harnessing Convolutional Neural Networks and Multiangle Multicamera Instruments
  • Feb 1, 2026
  • Journal of Atmospheric and Oceanic Technology
  • Hein Thant + 1 more

Classification of snowflakes based on their geometric shape, degree of riming, and melt/dry state can improve understanding, characterization, and quantification of other geometrical, microphysical, and scattering properties of ice particles. For example, classification provides essential ground-truth data for interpreting polarimetric radar signatures of snow while validating and advancing radar-based quantitative precipitation estimation. High-resolution photographs of snowflakes obtained by emerging multicamera instruments are well suited for snowflake classification, which, coupled with recent machine learning techniques based on convolutional neural networks (CNNs), enables accurate and fast automatic classification of snowflakes using images. Given that the appearance of a snowflake generally changes significantly with viewing angle, this work proposes and presents a novel multiview snowflake classification methodology based on the high-resolution photographs of frozen hydrometeors in free-fall from multiple views collected by the multicamera instruments. The approach employs machine/deep learning algorithms leveraging multiangle camera systems and enhanced supervised CNN-based techniques to achieve precise classification of snowflakes based on their geometrical categories and accurate and reliable estimates of specific snowflake properties, such as riming degree and melt/dry state. This represents the first multiview snowflake classification framework that takes full advantage of multiview camera systems. Presented multiview classification results show record accuracies of 98.57%, 98.22%, and 95.83% for geometric classes, riming degree, and melt/dry state, respectively.

Significance Statement: This work proposes and presents a novel multiview snowflake classification methodology leveraging recent developments in machine learning, multiview classification, and multiangle multicamera instruments for acquiring high-resolution photographs of frozen hydrometeors in free-fall from multiple views. The results for multiview classification show record accuracies for snowflake geometric classification, riming degree estimation, and melt/dry state estimation, significantly outperforming other classification models in each of the same categories. Automatic multiview machine learning–based winter hydrometeor classification enhances understanding, characterization, and quantification of geometrical, microphysical, and scattering properties of ice and snow hydrometeors. These improvements are essential for quantitative precipitation estimation algorithms and for microphysical parameterizations employed in numerical winter-weather forecast models and regional climate projections, with impacts on economy, safety, and everyday life.

  • Research Article
  • 10.1109/tmi.2025.3609319
MultiASNet: Multimodal Label Noise Robust Framework for the Classification of Aortic Stenosis in Echocardiography.
  • Feb 1, 2026
  • IEEE transactions on medical imaging
  • Victoria Wu + 10 more

Aortic stenosis (AS), a prevalent and serious heart valve disorder, requires early detection but remains difficult to diagnose in routine practice. Although echocardiography with Doppler imaging is the clinical standard, these assessments are typically limited to trained specialists. Point-of-care ultrasound (POCUS) offers an accessible alternative for AS screening but is restricted to basic 2D B-mode imaging, often lacking the analysis Doppler provides. Our project introduces MultiASNet, a multimodal machine learning framework designed to enhance AS screening with POCUS by combining 2D B-mode videos with structured data from echocardiography reports, including Doppler parameters. Using contrastive learning, MultiASNet aligns video features with report features in tabular form from the same patient to improve interpretive quality. To address misalignment where a single report corresponds to multiple video views, some irrelevant to AS diagnosis, we use cross-attention in a transformer-based video and tabular network to assign less importance to irrelevant report data. The model integrates structured data only during training, enabling independent use with B-mode videos during inference for broader accessibility. MultiASNet also incorporates sample selection to counteract label noise from observer variability, yielding improved accuracy on two datasets. We achieved balanced accuracy scores of 93.0% on a private dataset and 83.9% on the public TMED-2 dataset for AS detection. For severity classification, balanced accuracy scores were 80.4% and 59.4% on the private and public datasets, respectively. This model facilitates reliable AS screening in non-specialist settings, bridging the gap left by Doppler data while reducing noise-related errors. Our code is publicly available at github.com/DeepRCL/MultiASNet.

  • Research Article
  • 10.1016/j.forsciint.2025.112804
Using 2D video analysis and model based image matching to measure joint angles for forensic biomechanical analysis.
  • Feb 1, 2026
  • Forensic science international
  • Kevin G Gilmore + 4 more


  • Research Article
  • 10.1002/mp.70322
Demographic-aware deep learning for multi-organ segmentation: Mitigating gender and age biases in CT images.
  • Feb 1, 2026
  • Medical physics
  • Junqiang Ma + 3 more

Deep learning algorithms have shown promising results for automated organ-at-risk (OAR) segmentation in medical imaging. However, their performance is frequently compromised by demographic bias. This limitation becomes pronounced when conventional models fail to account for complex 3D anatomical variations across diverse groups, as they often overlook critical factors such as age and gender. Consequently, this oversight can lead to inaccurate segmentations, thereby posing significant risks to clinical safety in radiotherapy. To address this challenge, in this work, we develop a demographic-aware deep learning framework for multi-organ segmentation in computed tomography (CT) images. Our approach is designed to explicitly mitigate age- and gender-specific biases by incorporating demographic prompts and adaptive attention mechanisms, enabling the capture of multi-view anatomical features across diverse groups. We propose the Demographic-Aware Network (DA-Net), a novel framework trained on a unified dataset of 489 adult (AMOS2022) and 370 pediatric (Pediatric CT-SEG) CT scans, covering 30 organs and including 355 female scans. To robustly learn group-specific anatomical characteristics, DA-Net integrates the Demographic-Aware Hyper-Convolution (DA-HyperConv) module that dynamically adapts convolutional kernels based on demographic prompts. Additionally, an Adaptive Triplet Attention Block (ATAB) is embedded to further leverage multi-view features and enhance segmentation accuracy. We validate the generalizability and effectiveness of our framework on an external dataset (WORD, 150 adults, 62 females). The framework is evaluated quantitatively using the Dice Similarity Coefficient (DICE) and Normalized Surface Dice (NSD). DA-Net surpasses state-of-the-art (SOTA) methods across both the general group and specific demographic subgroups. In the AMOS2022 dataset (mean age 52.8 ± 16.1 years), DA-Net achieves the highest average DICE of 88.6% and NSD of 76.3% for adults. On the Pediatric CT-SEG dataset (mean age 6.9 ± 4.5 years), it achieves top performance with an average DICE of 75.3% ± 20.4% and NSD of 54.8% ± 20.9%. Notably, our proposed framework achieves substantial DICE improvements of 11% to 30% for gender-specific organs, significantly reducing performance disparities. Robustness and generalizability are further supported by consistent results on external validation using the WORD dataset. Compared with SOTA methods, our approach yields performance improvements of substantial importance on both the WORD and Pediatric CT-SEG datasets. In this work, we propose DA-Net, a segmentation network that explicitly incorporates age and gender attributes to mitigate performance disparities between pediatric and adult groups while combining multiple views of anatomic features to improve performance. By leveraging demographic information, DA-Net enhances segmentation accuracy, especially for gender-specific organs. The proposed framework highlights the necessity of developing fair and personalized models tailored to clinical applications, providing a foundation for building more equitable artificial intelligence systems in medical imaging.

  • Research Article
  • 10.1016/j.compbiolchem.2025.108665
Multiview-cooperated graph neural network enables novel multi-omics cancer subtype classification.
  • Feb 1, 2026
  • Computational biology and chemistry
  • Min Li + 5 more


  • Research Article
  • 10.1016/j.neunet.2025.108175
Anchor point segmentation based multi-view clustering.
  • Feb 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Wenhua Dong + 2 more



Copyright 2026 Cactus Communications. All rights reserved.
