Related Topics

  • Intermediate Views
  • Virtual View
  • Virtual Viewpoint
  • View Interpolation

Articles published on View synthesis

1001 search results, sorted by recency
  • Research Article
  • Cited: 1
  • 10.1016/j.isprsjprs.2025.10.022
ARSGaussian: 3D Gaussian Splatting with LiDAR for aerial remote sensing novel view synthesis
  • Jan 1, 2026
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Yiling Yao + 7 more

  • Research Article
  • 10.1016/j.dsp.2025.105573
An occlusion light field sparse Bayesian learning model for view synthesis
  • Jan 1, 2026
  • Digital Signal Processing
  • Weiyan Chen + 2 more

  • Research Article
  • 10.1016/j.jvcir.2025.104602
PatchNeRF: Patch-based Neural Radiance Fields for real time view synthesis in wide-scale scenes
  • Jan 1, 2026
  • Journal of Visual Communication and Image Representation
  • Ziyu Hu + 3 more

  • Research Article
  • 10.5194/isprs-archives-xlviii-1-w6-2025-251-2025
Evaluating 3D Gaussian Splatting for Urban Scene Reconstruction
  • Dec 31, 2025
  • The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • Ziyang Yan + 3 more

Accurate, detailed and efficient 3D reconstructions of large-scale urban environments are essential for applications such as autonomous driving, urban planning and digital twin construction. Recent advances in 3D Gaussian Splatting (3DGS) have shown remarkable potential in photorealistic novel view synthesis and high-fidelity scene reconstruction, but their applicability to large-scale urban reconstruction remains underexplored and often challenging. In this work, we present a comprehensive evaluation of 3D Gaussian Splatting techniques applied to urban scale 3D reconstruction. We systematically benchmark GS-based methods on diverse urban datasets, analyzing their performance in terms of scalability, geometric accuracy, rendering quality and computational efficiency. The study aims to bridge the gap between emerging 3DGS research and real-world urban reconstruction requirements, offering insights and guidelines for deploying Gaussian Splatting in practical large-scale scenarios.

  • Research Article
  • 10.3390/jimaging12010016
FluoNeRF: Fluorescent Novel-View Synthesis Under Novel Light Source Colors and Spectra
  • Dec 29, 2025
  • Journal of Imaging
  • Lin Shi + 4 more

Synthesizing photo-realistic images of a scene from arbitrary viewpoints and under arbitrary lighting environments is one of the important research topics in computer vision and graphics. In this paper, we propose a method for synthesizing photo-realistic images of a scene with fluorescent objects from novel viewpoints and under novel lighting colors and spectra. In general, fluorescent materials absorb light with certain wavelengths and then emit light with longer wavelengths than the absorbed ones, in contrast to reflective materials, which preserve wavelengths of light. Therefore, we cannot reproduce the colors of fluorescent objects under arbitrary lighting colors by combining conventional view synthesis techniques with the white balance adjustment of the RGB channels. Accordingly, we extend the novel-view synthesis based on the neural radiance fields by incorporating the superposition principle of light; our proposed method captures a sparse set of images of a scene from varying viewpoints and under varying lighting colors or spectra with active lighting systems such as a color display or a multi-spectral light stage and then synthesizes photo-realistic images of the scene without explicitly modeling its geometric and photometric models. We conducted a number of experiments using real images captured with an LCD and confirmed that our method works better than the existing methods. Moreover, we showed that the extension of our method using more than three primary colors with a light stage enables us to reproduce the colors of fluorescent objects under common light sources.
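The superposition principle this abstract leans on can be sketched in a few lines: because radiance is linear in illumination, an image of a scene under a novel light can be formed as a weighted sum of images captured under basis (primary) lights. The function name, array shapes, and toy scene below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def synthesize_under_novel_light(basis_images, novel_light_weights):
    """basis_images: (K, H, W, 3) images, one per primary light.
    novel_light_weights: (K,) decomposition of the novel light in that basis.
    Returns the (H, W, 3) image under the novel light via superposition."""
    basis = np.asarray(basis_images, dtype=np.float64)
    w = np.asarray(novel_light_weights, dtype=np.float64)
    return np.tensordot(w, basis, axes=1)  # contract over the K lights

# Toy check: a light that is half "red primary" plus half "blue primary".
red = np.zeros((4, 4, 3)); red[..., 0] = 1.0
blue = np.zeros((4, 4, 3)); blue[..., 2] = 1.0
img = synthesize_under_novel_light([red, blue], [0.5, 0.5])
```

Note that this linearity holds per wavelength, which is exactly why simple RGB white-balance tricks fail for fluorescent materials that shift energy between wavelengths.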

  • Research Article
  • 10.31522/p.33.2(70).7
Shaping of Belgrade’s Residential Architecture in the Socialist Period
  • Dec 27, 2025
  • Prostor
  • Ana Rajković + 1 more

The period from the 1970s to the 1980s saw great demand for apartment construction in Belgrade. The study examines how state policies in socialist Yugoslavia shaped architectural design principles, and how these design frameworks influenced everyday social life, with a focus on the Belgrade area. The research procedure includes the following methods: content analysis of professional and scientific literature in the fields of architecture, urbanism and the sociology of housing; historical-descriptive analysis of urban policies and housing construction in the period 1970-1980, through a review of available normative acts, publications and documents; interpretation of architectural elements characteristic of the Belgrade School of Housing; and theoretical synthesis of philosophical views on space and home, with critical application in the context of specific architectural practices. The aim is to examine the way in which the architecture of residential space in socialist Yugoslavia, and especially in Belgrade, influenced architectural design principles and how that affected the lifestyle of the population. Through this two-tier relationship - policy shaping architecture and architecture shaping society - the paper reveals housing as an active medium of social engineering. This paper provides a qualitative analysis of historical and architectural sources, with an interpretation of theoretical frameworks of architecture, urbanism and social philosophy.

  • Research Article
  • 10.1109/tpami.2025.3648837
Language Embedded 3D Gaussians for Open-Vocabulary Scene Querying.
  • Dec 26, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Miao Wang + 3 more

Open-vocabulary querying in 3D space is challenging but essential for scene understanding tasks such as object localization and segmentation. Language embedded scene representations have made progress by incorporating language features into 3D spaces. However, their efficacy heavily depends on neural networks that are resource-intensive in training and rendering. Although recent 3D Gaussians offer efficient and high-quality novel view synthesis, directly embedding language features in them leads to prohibitive memory usage and decreased performance. In this work, we introduce Language Embedded 3D Gaussians, a novel scene representation for open-vocabulary query tasks. Instead of embedding high-dimensional raw semantic features on 3D Gaussians, we propose a dedicated quantization scheme that drastically alleviates the memory requirement, and a novel embedding procedure that achieves smoother yet high accuracy query, countering the multi-view feature inconsistencies and the high-frequency inductive bias in point-based representations. Our comprehensive experiments show that our representation achieves the best visual quality and language querying accuracy across current language embedded representations, while maintaining real-time rendering frame rates on a single desktop GPU.
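To illustrate why quantizing per-primitive features alleviates memory (the paper's dedicated scheme is more involved than this), a plain k-means codebook lets each Gaussian store a small integer index instead of a full high-dimensional language feature. `build_codebook` and all sizes here are hypothetical:

```python
import numpy as np

def build_codebook(features, k, iters=10, seed=0):
    """Plain k-means vector quantization.
    Returns (codebook of k centroids, per-feature centroid index)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each feature to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each centroid to the mean of its members.
        for j in range(k):
            members = features[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign

feats = np.random.default_rng(1).normal(size=(500, 64)).astype(np.float32)
codebook, idx = build_codebook(feats, k=16)
# Storage drops from 500*64 floats to 16*64 floats plus 500 small indices.
```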

  • Research Article
  • 10.3390/app16010274
Temporally Aware Objective Quality Metric for Immersive Video
  • Dec 26, 2025
  • Applied Sciences
  • Jakub Stankowski + 3 more

State-of-the-art objective quality metrics designed for immersive content typically prioritize spatial distortions; therefore, they can omit temporal artifacts introduced by view synthesis and dynamic scene rendering. Consequently, metrics such as the commonly used peak signal-to-noise ratio for immersive video (IV-PSNR) are “temporally blind”, creating a conceptual gap where temporally stable distortions cannot be distinguished from disruptive temporal flickering. To address this limitation, we propose a temporal extension of the IV-PSNR metric that incorporates motion information into the quality assessment process. The method augments the traditional Y, U, and V color components with a fourth channel representing motion vectors (M), enabling the proposed four-component IV-PSNR-YUVM metric to account for dynamic distortions introduced by view rendering. To evaluate the effectiveness of the proposed approach, multiple configurations of motion integration were tested, including metrics based solely on motion consistency, metrics combining motion with texture, and several dense optical flow algorithms with different parameter settings. Extensive experiments performed on immersive video sequences demonstrate that the proposed four-component IV-PSNR-YUVM achieves the highest correlation with subjectively perceived video quality. These results confirm that combining texture with motion information provides a benefit, making the proposal a valuable addition for real-world immersive video systems.
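The four-component idea can be sketched as a weighted combination of per-plane PSNRs over Y, U, V and a motion plane M. The weights and the plain-MSE distortion model below are simplifying assumptions for illustration, not the actual IV-PSNR computation:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Standard single-plane PSNR in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def four_component_psnr(ref, test, weights=(4, 1, 1, 2)):
    """ref/test: dicts with planes 'Y', 'U', 'V', 'M' (M = motion magnitude).
    Weighted average of per-plane PSNRs; weights are hypothetical."""
    w = dict(zip("YUVM", weights))
    return sum(w[c] * psnr(ref[c], test[c]) for c in "YUVM") / sum(w.values())
```

The point of the extra M plane is that a temporally stable distortion leaves motion fields nearly identical between reference and test, while flicker perturbs them, so the M term penalizes temporal artifacts that the color planes alone miss.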

  • Research Article
  • 10.19181/socjour.2025.31.4.8
The Objectives of Social Cognition in the Second Half of the 19th Century: K.D. Kavelin's “Program”
  • Dec 25, 2025
  • Sociological Journal
  • Larissa Kozlova

This article examines the views of Konstantin Dmitrievich Kavelin (1818–1885), a Russian social thinker and public figure, on the objectives of social scientific knowledge in his time. It provides a historical description of his intellectual influences, a phenomenological account of his personal experience, a textual analysis of his late works, and addresses several source-study issues related to the ideological atmosphere of the period under study and his creative work. An attempt is made to study Kavelin's later legacy from historical-sociological, socio-philosophical and epistemological points of view, in the context of the evolution of his ideas aimed at developing a methodology of social cognition, which has not previously received attention in the scientific literature. The article draws on works from the 1870s and 1880s, letters and memoirs by Kavelin and his circle, as well as the works of contemporary Russian social scientists. It is shown that K.D. Kavelin's interest in the spiritual and moral life of the Russian people, “our mental formation”, had personal roots and meanings, and was linked to his worldview, character traits, the influence of his intellectual environment and social interests. Kavelin's assessments of the main trends in Russian philosophy and science are presented. While critiquing them, he offers his own vision of the development of social cognition, which can be loosely called a theoretical “program”. At its center is attention to the individual, their needs, and the paths of their spiritual, moral and mental development; the individual is the main element of society and the engine of social progress. Kavelin's complex of theoretical and methodological ideas is based on a synthesis of psychological and ethical views; the development of social thought is dependent on the state and prospects of psychology and ethics as priorities for understanding man and society in contemporary Russia.
It is revealed that Kavelin's “program” in his later years evolves from the formulation of psychological tasks to formulating ethical and philosophical ones and is ultimately determined by the synthesis of psychological and socio-philosophical ideas. His worldview and proposed methodology of social cognition are based on the principles of sociological nominalism, anthropocentrism, and psychologism. Conclusions are drawn about the contribution of K.D. Kavelin's teachings to the history of Russian thought, about the cultural significance of the “program”, as well as the connection between the thinker's theoretical ideas and pre-revolutionary social-scientific traditions in Russia.

  • Research Article
  • 10.1111/cgf.70287
NePO: Neural Point Octrees for Large‐Scale Novel View Synthesis
  • Dec 24, 2025
  • Computer Graphics Forum
  • Noah Lewis + 3 more

Point-based radiance field rendering produces impressive results for novel-view synthesis tasks. Established methods work with object-centric datasets or room-sized scenes, as computational resources and model capabilities are limited. To overcome this limitation, we introduce neural point octrees (NePOs) to radiance field rendering, which enables optimisation and rendering of large-scale datasets at varying detail levels, including different acquisition modalities, such as camera drones and LiDAR vehicles. Our method organises input point clouds into an octree from the bottom up, enabling level of detail (LOD) selection during rendering. Appearance descriptors for each point are optimised using the RGB captures, enabling our system to self-refine and address real-world challenges such as capture coverage discrepancies and SLAM pose drift. The refinement is achieved by adaptively densifying octree nodes during training and optimising camera poses via gradient descent. Overall, our approach efficiently optimises scenes with thousands of images and renders scenes containing hundreds of millions of points in real time.

  • Research Article
  • 10.1038/s41598-025-27784-2
Intraoperative 3D reconstruction from sparse arbitrarily posed real X-rays
  • Dec 13, 2025
  • Scientific Reports
  • Sascha Jecklin + 6 more

Spine surgery is a high-risk intervention demanding precise execution, often supported by image-based navigation systems. Recently, supervised learning approaches have gained attention for reconstructing 3D spinal anatomy from sparse fluoroscopic data, significantly reducing reliance on radiation-intensive 3D imaging systems. However, these methods typically require large amounts of annotated training data and may struggle to generalize across varying patient anatomies or imaging conditions. Instance-learning approaches like Gaussian splatting could offer an alternative by avoiding extensive annotation requirements. While Gaussian splatting has shown promise for novel view synthesis, its application to sparse, arbitrarily posed real intraoperative X-rays has remained largely unexplored. This work addresses this limitation by extending the R²-Gaussian splatting framework to reconstruct anatomically consistent 3D volumes under these challenging conditions. We introduce an anatomy-guided radiographic standardization step using style transfer, improving visual consistency across views, and enhancing reconstruction quality. Notably, our framework requires no pretraining, making it inherently adaptable to new patients and anatomies. We evaluated our approach using an ex-vivo dataset. Expert surgical evaluation confirmed the clinical utility of the 3D reconstructions for navigation, especially when using 20–30 views, and highlighted the standardization’s benefit for anatomical clarity. Benchmarking via quantitative 2D metrics (PSNR/SSIM) confirmed performance trade-offs compared to idealized settings, but also validated the improvement gained from standardization over raw inputs. This work demonstrates the feasibility of instance-based volumetric reconstruction from arbitrary sparse-view X-rays, advancing intraoperative 3D imaging for surgical navigation. Code and data to reproduce our results are made available at https://github.com/MrMonk3y/IXGS.

  • Research Article
  • 10.1109/tvcg.2025.3642516
VolSegGS: Segmentation and Tracking in Dynamic Volumetric Scenes via Deformable 3D Gaussians.
  • Dec 10, 2025
  • IEEE transactions on visualization and computer graphics
  • Siyuan Yao + 1 more

Visualization of large-scale time-dependent simulation data is crucial for domain scientists to analyze complex phenomena, but it demands significant I/O bandwidth, storage, and computational resources. To enable effective visualization on local, low-end machines, recent advances in view synthesis techniques, such as neural radiance fields, utilize neural networks to generate novel visualizations for volumetric scenes. However, these methods focus on reconstruction quality rather than facilitating interactive visualization exploration, such as feature extraction and tracking. We introduce VolSegGS, a novel Gaussian splatting framework that supports interactive segmentation and tracking in dynamic volumetric scenes for exploratory visualization and analysis. Our approach utilizes deformable 3D Gaussians to represent a dynamic volumetric scene, allowing for real-time novel view synthesis. For accurate segmentation, we leverage the view-independent colors of Gaussians for coarse-level segmentation and refine the results with an affinity field network for fine-level segmentation. Additionally, by embedding segmentation results within the Gaussians, we ensure that their deformation enables continuous tracking of segmented regions over time. We demonstrate the effectiveness of VolSegGS with several time-varying datasets and compare our solutions against state-of-the-art methods. With the ability to interact with a dynamic scene in real time and provide flexible segmentation and tracking capabilities, VolSegGS offers a powerful solution under low computational demands. This framework unlocks exciting new possibilities for time-varying volumetric data analysis and visualization.

  • Research Article
  • 10.1109/tpami.2025.3594705
Geo-NI: Geometry-Aware Neural Interpolation for Light Field Rendering.
  • Dec 1, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Gaochang Wu + 4 more

We present a novel Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering. Previous learning-based approaches either perform direct interpolation via neural networks, which we dubbed Neural Interpolation (NI), or explore scene geometry for novel view synthesis, also known as Depth Image-Based Rendering (DIBR). Both kinds of approaches have their own strengths and weaknesses in addressing non-Lambert effect and large disparity problems. In this paper, we incorporate the ideas behind these two kinds of approaches by launching the NI within a specific DIBR pipeline. Specifically, a DIBR network in the proposed Geo-NI serves to construct a novel reconstruction cost volume for neural interpolated light fields sheared by different depth hypotheses. The reconstruction cost can be interpreted as an indicator reflecting the reconstruction quality under a certain depth hypothesis, and is further applied to guide the rendering of the final high angular resolution light field. To implement the Geo-NI framework more practically, we further propose an efficient modeling strategy to encode high-dimensional cost volumes using a lower-dimension network. By combining the superiorities of NI and DIBR, the proposed Geo-NI is able to render views with large disparities with the help of scene geometry while also reconstructing the non-Lambertian effect when depth is prone to be ambiguous. Extensive experiments on various datasets demonstrate the superior performance of the proposed geometry-aware light field rendering framework.

  • Research Article
  • 10.1109/tpami.2025.3600473
NeuMesh++: Toward Versatile and Efficient Volumetric Editing With Disentangled Neural Mesh-Based Implicit Field.
  • Dec 1, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Chong Bao + 7 more

Recently neural implicit rendering techniques have evolved rapidly and demonstrated significant advantages in novel view synthesis and 3D scene reconstruction. However, existing neural rendering methods for editing purposes offer limited functionalities, e.g., rigid transformation and category-specific editing. In this paper, we present a novel mesh-based representation by encoding the neural radiance field with disentangled geometry, texture, and semantic codes on mesh vertices, which empowers a set of efficient and comprehensive editing functionalities, including mesh-guided geometry editing, designated texture editing with texture swapping, filling and painting operations, and semantic-guided editing. To this end, we develop several techniques including a novel local space parameterization to enhance rendering quality and training stability, a learnable modification color on vertex to improve the fidelity of texture editing, a spatial-aware optimization strategy to realize precise texture editing, and a semantic-aided region selection to ease the laborious annotation of implicit field editing. Extensive experiments and editing examples on both real and synthetic datasets demonstrate the superiority of our method on representation quality and editing ability.

  • Research Article
  • Cited: 2
  • 10.1109/tpami.2025.3598711
Explicit Correspondence Matching for Generalizable Neural Radiance Fields.
  • Dec 1, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Yuedong Chen + 5 more

We present a new generalizable NeRF method that is able to directly generalize to new unseen scenarios and perform novel view synthesis with as few as two source views. The key to our approach lies in the explicitly modeled correspondence matching information, so as to provide the geometry prior to the prediction of NeRF color and density for volume rendering. The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views, which is able to provide reliable cues about the surface geometry. Unlike previous methods where image features are extracted independently for each view, we consider modeling the cross-view interactions via Transformer cross-attention, which greatly improves the feature matching quality. Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density, demonstrating the effectiveness and superiority of our proposed method.
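The geometric cue described above, cosine similarity between image features sampled at the 2D projections of a 3D point in different source views, can be sketched as follows. `project`, `sample_feature`, and `matching_score` are illustrative names; the pinhole projection and nearest-neighbor sampling are simplifications of a real pipeline, which would use learned CNN feature maps and bilinear interpolation:

```python
import numpy as np

def project(point_3d, P):
    """Project a 3D point with a 3x4 camera matrix P -> (x, y) pixel coords."""
    p = P @ np.append(point_3d, 1.0)
    return p[:2] / p[2]

def sample_feature(feature_map, xy):
    """Nearest-neighbor sample from an (H, W, C) feature map, clamped to bounds."""
    h, w, _ = feature_map.shape
    x = int(np.clip(np.rint(xy[0]), 0, w - 1))
    y = int(np.clip(np.rint(xy[1]), 0, h - 1))
    return feature_map[y, x]

def matching_score(point_3d, feat_maps, cams):
    """Mean pairwise cosine similarity of features at the point's projections.
    High similarity is evidence the point lies on a visible surface."""
    feats = [sample_feature(f, project(point_3d, P)) for f, P in zip(feat_maps, cams)]
    feats = [f / (np.linalg.norm(f) + 1e-8) for f in feats]
    sims = [feats[i] @ feats[j]
            for i in range(len(feats)) for j in range(i + 1, len(feats))]
    return float(np.mean(sims))
```

Evaluated along a camera ray, such scores peak near the true surface, which is why they correlate with volume density as the abstract reports.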

  • Research Article
  • Cited: 1
  • 10.1016/j.cag.2025.104447
CeRF: Convolutional neural radiance derivative fields for new view synthesis
  • Dec 1, 2025
  • Computers & Graphics
  • Wenjie Liu + 5 more

  • Research Article
  • 10.1145/3763305
Differentiable Light Transport with Gaussian Surfels via Adapted Radiosity for Efficient Relighting and Geometry Reconstruction
  • Dec 1, 2025
  • ACM Transactions on Graphics
  • Kaiwen Jiang + 5 more

Radiance fields have gained tremendous success with applications ranging from novel view synthesis to geometry reconstruction, especially with the advent of Gaussian splatting. However, they sacrifice modeling of material reflective properties and lighting conditions, leading to significant geometric ambiguities and the inability to easily perform relighting. One way to address these limitations is to incorporate physically-based rendering, but it has been prohibitively expensive to include full global illumination within the inner loop of the optimization. Therefore, previous works adopt simplifications that make the whole optimization with global illumination effects efficient but less accurate. In this work, we adopt Gaussian surfels as the primitives and build an efficient framework for differentiable light transport, inspired from the classic radiosity theory. The whole framework operates in the coefficient space of spherical harmonics, enabling both diffuse and specular materials. We extend the classic radiosity into non-binary visibility and semi-opaque primitives, propose novel solvers to efficiently solve the light transport, and derive the backward pass for gradient optimizations, which is more efficient than auto-differentiation. During inference, we achieve view-independent rendering where light transport need not be recomputed under viewpoint changes, enabling hundreds of FPS for global illumination effects, including view-dependent reflections using a spherical harmonics representation. Through extensive qualitative and quantitative experiments, we demonstrate superior geometry reconstruction, view synthesis and relighting than previous inverse rendering baselines, or data-driven baselines given relatively sparse datasets with known or unknown lighting conditions.
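For readers unfamiliar with the classic radiosity theory the framework adapts: per-patch radiosity B satisfies B = E + diag(rho) F B, where E is emission, rho is reflectance, and F holds form factors between patches. Below is a textbook Jacobi-style iterative solver on a hypothetical 3-patch scene, not the paper's spherical-harmonics, semi-opaque-surfel formulation:

```python
import numpy as np

def solve_radiosity(E, rho, F, iters=200):
    """Iterate B <- E + rho * (F @ B); converges when the spectral
    radius of diag(rho) @ F is below 1 (true for physical scenes)."""
    B = E.astype(np.float64).copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

E = np.array([1.0, 0.0, 0.0])       # only patch 0 emits light
rho = np.array([0.5, 0.8, 0.3])     # per-patch diffuse reflectance
F = np.array([[0.0, 0.3, 0.2],      # form factors; each row sums to <= 1
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
B = solve_radiosity(E, rho, F)      # converges to (I - diag(rho) F)^-1 E
```

A key property the abstract exploits carries over from this formulation: the solved transport is view-independent, so B need not be recomputed when the camera moves.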

  • Research Article
  • 10.1016/j.neucom.2025.131657
Conditional plane-based multi-scene representation for novel view synthesis
  • Dec 1, 2025
  • Neurocomputing
  • Uchitha Rajapaksha + 4 more

  • Research Article
  • 10.1145/3763276
Frame-Free Representation of Polarized Light for Resolving Stokes Vector Singularities
  • Dec 1, 2025
  • ACM Transactions on Graphics
  • Shinyoung Yi + 4 more

Stokes parameters are the standard representation of polarized light intensity in Mueller calculus and are widely used in polarization-aware computer graphics. However, their reliance on local frames, aligned with ray propagation directions, introduces a fundamental limitation: numerical discontinuities in Stokes vectors despite physically continuous fields of polarized light. This issue originates from the Hairy Ball Theorem, which guarantees unavoidable singularities in any frame-dependent function defined over spherical directional domains. In this paper, we overcome this long-standing challenge by introducing the first frame-free representation of Stokes vectors. Our key idea is to reinterpret a Stokes vector as a Dirac delta function over the directional domain and project it onto spin-2 spherical harmonics, retaining only the lowest-frequency coefficients. This compact representation supports coordinate-invariant interpolation and distance computation between Stokes vectors across varying ray directions, without relying on local frames. We demonstrate the advantages of our approach in two representative applications: spherical resampling of polarized environment maps (e.g., between cube map and equirectangular formats), and view synthesis from polarized radiance fields. In both cases, conventional frame-dependent methods produce singularity artifacts. In contrast, our frame-free representation eliminates these artifacts, improves numerical robustness, and simplifies implementation by decoupling polarization encoding from local frames.

  • Research Article
  • 10.1051/0004-6361/202556730
Asteroid-GS: 3D Gaussian splatting for fast surface reconstruction of asteroids
  • Dec 1, 2025
  • Astronomy & Astrophysics
  • Xiaojie Zhang + 3 more

Context. Asteroid surface reconstruction is essential for deep space exploration missions, as it provides critical information about surface morphology that supports spacecraft navigation and sample acquisition. Traditional methods, such as stereo-photogrammetry (SPG) and stereo-photoclinometry (SPC), have been widely applied in asteroid missions, but they often rely on large amounts of data or considerable manual intervention to derive reliable models. Meanwhile, intelligent methods based on the neural radiance field (NeRF) suffer from slow processing speeds, often requiring several hours or even days to complete surface reconstruction. Recent 3D Gaussian splatting (3DGS) shows promise in fast surface reconstruction but faces some challenges in asteroid scenarios, limiting its direct application. Aims. This paper presents Asteroid-GS, a fast and intelligent method for reconstructing asteroid surface models based on 3DGS. It is intended to complement current methodologies, enabling asteroid reconstruction with a limited number of images and a small amount of processing time while achieving an accuracy comparable to that of existing algorithms. Methods. Our method incorporates an adaptive Gaussian pruning strategy to remove noise from asteroids in deep space environments. Shallow multilayer perceptrons integrated with asteroid illumination are employed to improve the reconstruction in both well-lit and shadowed regions. Additionally, we employ geometric regularization techniques to enhance surface detail preservation and construct the Gaussian opacity field to enable accurate surface mesh extraction. Results. Experimental results on asteroids Itokawa and Ryugu demonstrate that our method outperforms state-of-the-art 3DGS-based methods in terms of 3D model accuracy and novel view synthesis.
It maintains geometric consistency with traditional models, achieving better results than SPG given the same input images, while notably reducing processing time and manual intervention compared to SPC. Asteroid-GS completes reconstruction within one hour, requiring significantly less time than NeRF-based methods. Our work provides a supplementary solution for asteroid surface reconstruction, potentially improving the efficiency of future exploration missions.



Copyright 2026 Cactus Communications. All rights reserved.
