2DGH: 2D Gaussian-Hermite Splatting for High-quality Rendering and Better Geometry Features.

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

2D Gaussian Splatting has recently emerged as a significant method in 3D reconstruction, enabling novel view synthesis and geometry reconstruction simultaneously. While the well-known Gaussian kernel is broadly used, its lack of anisotropy and deformation ability leads to dim and vague edges at object silhouettes, limiting the reconstruction quality of current Gaussian splatting methods. To enhance the representation power, we draw inspiration from quantum physics and propose to use the Gaussian-Hermite kernel as the new primitive in Gaussian splatting. The new kernel takes a unified mathematical form and extends the Gaussian function, which serves as the zero-rank special case in the updated general formulation. Our experiments demonstrate that the proposed Gaussian-Hermite kernel achieves improved performance over traditional Gaussian Splatting kernels on both geometry reconstruction and novel-view synthesis tasks. Specifically, on the DTU dataset, our method yields more accurate geometry reconstruction, while on datasets such as MipNeRF360 and our customized Detail dataset, it achieves better results in novel-view synthesis. These results highlight the potential of the Gaussian-Hermite kernel for high-quality 3D reconstruction and rendering.
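For readers unfamiliar with the kernel family, the short sketch below (our illustration, not the authors' code) evaluates a 1D Gaussian-Hermite kernel H_n(x) * exp(-x^2 / (2*sigma^2)); setting the order n = 0 recovers the plain Gaussian, which is the zero-rank special case the abstract refers to. The actual 2DGH primitive is a 2D anisotropic version with learned parameters, so treat this only as a minimal numerical picture of the kernel family.

    import numpy as np
    from numpy.polynomial.hermite import hermval

    def gaussian_hermite(x, sigma=1.0, order=0):
        """1D Gaussian-Hermite kernel: H_order(x / (sqrt(2)*sigma)) * exp(-x^2 / (2*sigma^2))."""
        coeffs = np.zeros(order + 1)
        coeffs[order] = 1.0                          # select the single Hermite polynomial H_order
        u = x / (np.sqrt(2.0) * sigma)
        return hermval(u, coeffs) * np.exp(-x ** 2 / (2.0 * sigma ** 2))

    x = np.linspace(-4.0, 4.0, 9)
    print(gaussian_hermite(x, order=0))              # plain Gaussian (zero-order special case)
    print(gaussian_hermite(x, order=2))              # Hermite-modulated kernel with extra lobes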

Similar Papers
  • Research Article
  • 10.1038/s41598-025-03200-7
Single view generalizable 3D reconstruction based on 3D Gaussian splatting
  • May 27, 2025
  • Scientific Reports
  • Kun Fang + 4 more

3D Gaussian Splatting (3DGS) has become a significant research focus in recent years, particularly for 3D reconstruction and novel view synthesis under non-ideal conditions. Among these studies, tasks involving sparse input data have been further classified, with the most challenging scenario being the reconstruction of 3D structures and synthesis of novel views from a single input image. In this paper, we introduce SVG3D, a method for generalizable 3D reconstruction from a single view, based on 3DGS. We use a state-of-the-art monocular depth estimator to obtain depth maps of the scenes. These depth maps, along with the original scene images, are fed into a U-Net network, which predicts the parameters for 3D Gaussian ellipsoids corresponding to each pixel. Unlike previous work, we do not stratify the predicted 3D Gaussian ellipsoids but allow the network to learn the positioning autonomously. This design enables accurate geometric representation when rendered from the target camera view, significantly enhancing novel view synthesis accuracy. We trained our model on the RealEstate10K dataset and performed both quantitative and qualitative analysis on the test set. We compared single-view novel view 3D reconstruction methods across different 3D representation techniques, including methods based on Multi-Plane Image (MPI) representation, hybrid MPI and Neural Radiance Fields representation, and the current state-of-the-art methods using 3DGS representation for single-view novel view reconstruction. These comparisons substantiated the effectiveness and accuracy of our method. Additionally, to assess the generalizability of our network, we validated it across the NYU and KITTI datasets, and the results confirmed its robust cross-dataset generalization capability.
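As a rough illustration of the per-pixel prediction described above (hypothetical layout and names, not the SVG3D implementation), the sketch below shows one plausible channel layout for a network head that emits Gaussian parameters for every pixel, together with the standard pinhole unprojection that turns a depth value into a 3D Gaussian center.

    import numpy as np

    def unproject(u, v, depth, fx, fy, cx, cy):
        """Back-project pixel (u, v) with a depth value into camera-space XYZ."""
        x = (u - cx) / fx * depth
        y = (v - cy) / fy * depth
        return np.array([x, y, depth])

    # One plausible per-pixel parameter layout for a U-Net head (assumed, for illustration):
    #   3 (position / depth offset) + 3 (log-scales) + 4 (rotation quaternion)
    #   + 1 (opacity) + 3 (RGB) = 14 channels
    PARAMS_PER_PIXEL = 3 + 3 + 4 + 1 + 3

    H, W = 256, 256
    pred = np.random.randn(H, W, PARAMS_PER_PIXEL)   # stand-in for the network output
    center = unproject(128, 128, depth=2.5, fx=300.0, fy=300.0, cx=128.0, cy=128.0)
    print(pred.shape, center)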

  • Research Article
  • 10.3390/rs17091520
Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and Extraction of Individual Tree Parameters
  • Apr 25, 2025
  • Remote Sensing
  • Guoji Tian + 2 more

The accurate and efficient 3D reconstruction of trees is beneficial for urban forest resource assessment and management. Close-range photogrammetry (CRP) is widely used in the 3D model reconstruction of forest scenes. However, in practical forestry applications, challenges such as low reconstruction efficiency and poor reconstruction quality persist. Recently, novel view synthesis (NVS) technology, such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS), has shown great potential for the 3D reconstruction of plants from a limited number of images. However, existing research typically focuses on small plants in orchards or individual trees, and it remains uncertain whether this technology can be effectively applied to larger, more complex stands or forest scenes. In this study, we collected sequential images of urban forest plots with varying levels of complexity using imaging devices with different resolutions (smartphone cameras and a UAV). These plots included one with sparse, leafless trees and another with dense foliage and more occlusions. We then performed dense reconstruction of the forest stands using NeRF and 3DGS methods. The resulting point cloud models were compared with those obtained through photogrammetric reconstruction and laser scanning. The results show that, compared to the photogrammetric method, NVS methods have a significant advantage in reconstruction efficiency. The photogrammetric method is suitable for relatively simple forest stands but adapts poorly to complex ones, producing tree point cloud models with issues such as excessive canopy noise and wrongly reconstructed trees with duplicated trunks and canopies. In contrast, NeRF is better adapted to more complex forest stands, yielding tree point clouds of the highest quality that offer more detailed trunk and canopy information; however, it can lead to reconstruction errors in the ground area when the input views are limited. The 3DGS method has a relatively poor capability to generate dense point clouds, resulting in models with low point density, particularly sparse points in the trunk areas, which affects the accuracy of diameter at breast height (DBH) estimation. Tree height and crown diameter information can be extracted from the point clouds reconstructed by all three methods, with NeRF achieving the highest accuracy in tree height, although the accuracy of DBH extracted from photogrammetric point clouds is still higher than that from NeRF point clouds. Meanwhile, compared to ground-level smartphone images, tree parameters extracted from the reconstructions of higher-resolution drone images with varied perspectives are more accurate. These findings confirm that NVS methods have significant application potential for the 3D reconstruction of urban forests.
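To make the DBH discussion concrete, the sketch below shows one common way tree parameters are extracted from a reconstructed point cloud (a simplified illustration, not the study's pipeline): slice the trunk points around breast height (1.3 m) and fit a circle to their horizontal coordinates. Sparse trunk points, as reported for the 3DGS reconstructions, make this fit unreliable.

    import numpy as np

    def dbh_from_points(points, breast_height=1.3, slab=0.05):
        """Estimate diameter at breast height from an (N, 3) trunk point cloud (x, y, z in metres)."""
        z = points[:, 2]
        slice_pts = points[np.abs(z - breast_height) < slab][:, :2]
        x, y = slice_pts[:, 0], slice_pts[:, 1]
        A = np.stack([2 * x, 2 * y, np.ones_like(x)], axis=1)       # algebraic (Kasa) circle fit
        a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
        radius = np.sqrt(c + a ** 2 + b ** 2)
        return 2.0 * radius

    # Synthetic trunk: a 0.3 m diameter cylinder sampled between 0 and 2 m height.
    theta = np.random.rand(2000) * 2 * np.pi
    pts = np.stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), np.random.rand(2000) * 2.0], axis=1)
    print(dbh_from_points(pts))                                     # close to 0.30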

  • Research Article
  • Citations: 2
  • 10.54097/hset.v39i.6732
Research on 3D Object Reconstruction Method based on Deep Learning
  • Apr 1, 2023
  • Highlights in Science, Engineering and Technology
  • Xiaoyang Liu

3D reconstruction is a classic task in the field of computer graphics. More and more researchers are trying to replicate the success of deep learning in 2D image processing tasks in 3D reconstruction, so deep-learning-based 3D reconstruction has gradually become a research hotspot. Compared with traditional 3D reconstruction methods, which require precision acquisition equipment and strictly calibrated image information, deep-learning-based 3D reconstruction maps 2D images to 3D models through deep neural networks and can quickly reconstruct 3D models of objects from many categories, in large numbers, from RGB images obtained with ordinary acquisition equipment. This paper introduces the state of the art in 3D voxel reconstruction, 3D point cloud reconstruction, and 3D mesh reconstruction. According to the different representation methods of 3D objects, deep-learning-based 3D object reconstruction methods are classified and reviewed, the characteristics and shortcomings of existing methods are analyzed, and three important research trends are summarized.

  • Research Article
  • Citations: 14
  • 10.1049/iet-ipr.2019.0854
Improving 3D reconstruction accuracy in wavelet transform profilometry by reducing shadow effects
  • Feb 1, 2020
  • IET Image Processing
  • Claudia‐Victoria López‐Torres + 4 more

Wavelet transform profilometry is a three‐dimensional (3D) reconstruction method based on the structured light technique of fringe pattern projection, widely used because it is a non‐invasive, high‐performance 3D reconstruction method. The presence of shadows created by the object in the image capture process is an obstacle to obtaining accurate 3D reconstructions, as they add noise to the phase data, leading to artefacts in object reconstruction even when robust phase‐unwrapping algorithms are used. Since shadows present diverse intensities and shapes, detecting and eliminating their effects are challenging tasks. This work presents a novel method to detect shadow regions and reduce their effects in 3D reconstruction. The proposed method uses coloured fringe patterns to detect the shadows and mathematical morphology to condition the outlines of the shadow regions. The shadow outline information is used to interpolate the background‐plane fringe pattern onto the captured scene, where the shadows are detected. On average, the mean squared error (MSE) of the reconstructed objects is reduced to 25% of the MSE without shadow removal when using the Bioucas phase‐unwrapping method, and to 8.3% when using the Ghiglia phase‐unwrapping method.
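As a toy illustration of the mask-conditioning step (simplified: shadows are detected here by a plain intensity threshold rather than the paper's coloured fringe patterns), the sketch below builds a shadow mask from a synthetic fringe image and smooths its outline with binary morphology, which is the kind of conditioning that would precede fringe interpolation across the shadow region.

    import numpy as np
    from scipy import ndimage

    # Synthetic fringe pattern with a dark "shadow" rectangle.
    x = np.linspace(0, 20 * np.pi, 256)
    fringe = 0.5 + 0.5 * np.sin(x)[None, :] * np.ones((128, 1))
    fringe[40:80, 100:160] *= 0.05

    # Detect shadow as regions of low local mean intensity, then condition the
    # mask outline with morphological closing followed by opening.
    local_mean = ndimage.uniform_filter(fringe, size=31)
    shadow_mask = local_mean < 0.2
    conditioned = ndimage.binary_opening(
        ndimage.binary_closing(shadow_mask, np.ones((5, 5))), np.ones((5, 5)))
    print(conditioned.sum(), "shadow pixels after conditioning")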

  • Book Chapter
  • Citations: 1
  • 10.1007/978-3-642-77463-8_23
Optimized Scan Modes and Reconstruction Techniques for Three-Dimensional Display of Bone Structures
  • Jan 1, 1992
  • T. Fleiter + 4 more

The high quality three-dimensional (3D) presentation of computed tomography (CT) data has become a common technique since fast imaging systems and 3D software became widely available at the beginning of the 1980s. Unfortunately, the quality of the 3D images produced by standard 3D packages depends not only on several basic scanning parameters such as voltage, amperage, slice, and increment, but also on the primary reconstruction algorithm, image separation, and the 3D reconstruction and shading method. Therefore, no standard 3D scanning method providing a high quality 3D display combined with an acceptable acquisition and reconstruction time has been established. New volume scanning methods such as spiral CT offer additional ways to shorten the examination time, but the use of spiral CT also poses some problems for 3D imaging. The aim of our study was to compare different scanning and 3D reconstruction methods in order to develop standard scanning sequences for routine 3D CT imaging.

  • Research Article
  • 10.1109/tip.2025.3574929
LoopSparseGS: Loop-Based Sparse-View Friendly Gaussian Splatting.
  • Jan 1, 2025
  • IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
  • Zhenyu Bao + 5 more

Despite the photorealistic novel view synthesis (NVS) performance achieved by the original 3D Gaussian splatting (3DGS), its rendering quality significantly degrades with sparse input views. This performance drop is mainly caused by the limited number of initial points generated from the sparse input, the lack of reliable geometric supervision during the training process, and inadequate regularization of oversized Gaussian ellipsoids. To handle these issues, we propose LoopSparseGS, a loop-based 3DGS framework for the sparse novel view synthesis task. Specifically, we propose a loop-based Progressive Gaussian Initialization (PGI) strategy that iteratively densifies the initialized point cloud using the rendered pseudo images during the training process. Then, the sparse and reliable depth from Structure from Motion and the window-based dense monocular depth are leveraged to provide precise geometric supervision via the proposed Depth-alignment Regularization (DAR). Additionally, we introduce a novel Sparse-friendly Sampling (SFS) strategy to handle oversized Gaussian ellipsoids that lead to large pixel errors. Comprehensive experiments on four datasets demonstrate that LoopSparseGS outperforms existing state-of-the-art methods for sparse-input novel view synthesis across indoor, outdoor, and object-level scenes with various image resolutions. Code is available at: https://github.com/pcl3dv/LoopSparseGS.
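The sketch below illustrates the generic scale-and-shift alignment that depth regularizers of this kind typically build on: fit s and t so that s * mono_depth + t matches the sparse SfM depths at the pixels where SfM depth exists, then use the aligned dense depth as supervision. This is a hedged illustration of the general idea, not LoopSparseGS's exact window-based DAR.

    import numpy as np

    def align_scale_shift(mono, sfm, mask):
        """Least-squares fit of s, t on pixels where sparse SfM depth is available."""
        m = mono[mask]
        A = np.stack([m, np.ones_like(m)], axis=1)
        s, t = np.linalg.lstsq(A, sfm[mask], rcond=None)[0]
        return s * mono + t

    mono = np.random.rand(64, 64) + 0.5            # relative monocular depth
    mask = np.random.rand(64, 64) < 0.02           # ~2% of pixels have SfM depth
    sfm = 3.0 * mono + 0.7                         # pretend metric depth at those pixels
    aligned = align_scale_shift(mono, sfm, mask)
    depth_l1 = np.abs(aligned - sfm).mean()        # dense depth supervision term
    print(depth_l1)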

  • Research Article
  • Citations: 1
  • 10.5194/isprs-archives-xlviii-2-w7-2024-189-2024
CDGS: Confidence-Aware Depth Regularization for 3D Gaussian Splatting
  • Dec 13, 2024
  • The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • Qilin Zhang + 3 more

3D Gaussian Splatting (3DGS) has shown significant advantages in novel view synthesis (NVS), particularly in achieving high rendering speeds and high-quality results. However, its geometric accuracy in 3D reconstruction remains limited due to the lack of explicit geometric constraints during optimization. This paper introduces CDGS, a confidence-aware depth regularization approach developed to enhance 3DGS. We leverage multi-cue confidence maps from monocular depth estimation and sparse Structure-from-Motion (SfM) depth to adaptively adjust depth supervision during the optimization process. Our method demonstrates improved geometric detail preservation in early training stages and achieves competitive performance in both NVS quality and geometric accuracy. Experiments on the publicly available Tanks and Temples benchmark dataset show that our method achieves more stable convergence behavior and more accurate geometric reconstruction results, with improvements of up to 2.31 dB in PSNR for NVS and consistently lower geometric errors in M3C2 distance metrics. Notably, our method reaches F-scores comparable to the original 3DGS with only 50% of the training iterations. We expect this work will facilitate the development of efficient and accurate 3D reconstruction systems for real-world applications such as digital twin creation, heritage preservation, and forestry applications.
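A minimal sketch of the idea of confidence-weighted depth supervision follows (an assumption for illustration, not the CDGS code): a per-pixel confidence value, here derived from a single cue, down-weights unreliable monocular-depth supervision during splat optimization.

    import numpy as np

    def confidence_weighted_depth_loss(rendered_depth, mono_depth, confidence):
        """Per-pixel confidence down-weights unreliable monocular-depth supervision."""
        err = np.abs(rendered_depth - mono_depth)
        return float((confidence * err).sum() / (confidence.sum() + 1e-8))

    H, W = 64, 64
    mono = np.random.rand(H, W) + 1.0
    rendered = mono + 0.05 * np.random.randn(H, W)
    # One simple cue: agreement between the monocular depth and SfM depth, mapped
    # to (0, 1]; a real multi-cue confidence map would combine several such terms.
    sfm = mono + 0.2 * np.random.randn(H, W)
    confidence = np.exp(-np.abs(mono - sfm))
    print(confidence_weighted_depth_loss(rendered, mono, confidence))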

  • Research Article
  • Citations: 10
  • 10.1609/aaai.v38i7.28626
Sparse3D: Distilling Multiview-Consistent Diffusion for Object Reconstruction from Sparse Views
  • Mar 24, 2024
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Zixin Zou + 5 more

Reconstructing 3D objects from extremely sparse views is a long-standing and challenging problem. While recent techniques employ image diffusion models for generating plausible images at novel viewpoints or for distilling pre-trained diffusion priors into 3D representations using score distillation sampling (SDS), these methods often struggle to simultaneously achieve high-quality, consistent, and detailed results for both novel-view synthesis (NVS) and geometry. In this work, we present Sparse3D, a novel 3D reconstruction method tailored for sparse view inputs. Our approach distills robust priors from a multiview-consistent diffusion model to refine a neural radiance field. Specifically, we employ a controller that harnesses epipolar features from input views, guiding a pre-trained diffusion model, such as Stable Diffusion, to produce novel-view images that maintain 3D consistency with the input. By tapping into 2D priors from powerful image diffusion models, our integrated model consistently delivers high-quality results, even when faced with open-world objects. To address the blurriness introduced by conventional SDS, we introduce the category-score distillation sampling (C-SDS) to enhance detail. We conduct experiments on CO3DV2 which is a multi-view dataset of real-world objects. Both quantitative and qualitative evaluations demonstrate that our approach outperforms previous state-of-the-art works on the metrics regarding NVS and geometry reconstruction.
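For context on the distillation mechanism mentioned above, the sketch below shows the standard score-distillation-sampling (SDS) gradient that such methods push back through the rendered image; the denoiser here is a stand-in callable, not Stable Diffusion, and this is plain SDS rather than the paper's C-SDS variant.

    import numpy as np

    def sds_gradient(rendered, denoiser, t, alpha_bar, weight=1.0):
        """Gradient of an SDS-style loss w.r.t. the rendered image at one noise level t."""
        eps = np.random.randn(*rendered.shape)                        # sampled noise
        noisy = np.sqrt(alpha_bar) * rendered + np.sqrt(1.0 - alpha_bar) * eps
        eps_pred = denoiser(noisy, t)                                 # noise predicted by the prior
        return weight * (eps_pred - eps)                              # no backprop through the denoiser

    # Stand-in denoiser; in SDS-based pipelines this would be a pre-trained diffusion model.
    dummy_denoiser = lambda x, t: 0.9 * x
    rendered = np.random.rand(3, 64, 64)
    grad = sds_gradient(rendered, dummy_denoiser, t=500, alpha_bar=0.5)
    print(grad.shape)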

  • Research Article
  • Citations: 3
  • 10.1590/1678-9199-jvatitd-1449-18
A lung image reconstruction from computed radiography images as a tool to tuberculosis treatment control.
  • Jan 1, 2019
  • Journal of Venomous Animals and Toxins including Tropical Diseases
  • Marcela De Oliveira + 6 more

Background: Tuberculosis (TB) is an infectious lung disease with high worldwide incidence that severely compromises the quality of life of affected individuals. Clinical tests are currently employed to monitor pulmonary status and treatment progression. The present study aimed to apply a three-dimensional (3D) reconstruction method based on chest radiography to quantify the lung-involvement volume of acute-phase TB patients before and after treatment. In addition, these results were compared with indices from conventional clinical exams to assess their level of agreement. Methods: A 3D lung reconstruction method using patient chest radiography was applied to quantify lung-involvement volume in retrospective examinations of 50 patients who were diagnosed with pulmonary TB and treated with two different drug schemes. Twenty-five patients were treated with Scheme I (rifampicin, isoniazid, and pyrazinamide), whereas twenty-five patients were treated with Scheme II (rifampicin, isoniazid, pyrazinamide, and ethambutol). Acute-phase reaction serum exams included C-reactive protein levels, erythrocyte sedimentation rate, and albumin levels. Pulmonary function was tested posttreatment. Results: We found strong agreement between lung involvement and serum indices pre- and posttreatment. Comparison of the functional severity degree with lung involvement based on 3D image quantification found a high correlation for both treatment schemes. Conclusions: The present 3D reconstruction method produced satisfactory agreement with the acute-phase reaction, most notably a higher significance level with C-reactive protein. We also found reasonable agreement between the 3D reconstruction method and the degree of functional lung impairment posttreatment. The performance of the quantification method was satisfactory when comparing the two treatment schemes. Thus, the 3D reconstruction quantification method may be a useful tool for monitoring TB treatment. The association with serum indices is not only inexpensive and sensitive but may also be incorporated into the assessment of patients during TB treatment.

  • Research Article
  • Citations: 4
  • 10.1016/j.cag.2022.07.019
Dynamic scene novel view synthesis via deferred spatio-temporal consistency
  • Jul 25, 2022
  • Computers & Graphics
  • Beatrix-Emőke Fülöp-Balogh + 4 more


  • Conference Article
  • Citations: 40
  • 10.1109/wacv56688.2023.00432
Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation
  • Jan 1, 2023
  • Verica Lazova + 4 more

We present Control-NeRF, a method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis from a set of posed input images. NeRF-based approaches [23] are effective for novel view synthesis; however, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing such as shape manipulation, or combining scenes, is not possible. While there are some recent hybrid approaches that combine NeRF with external scene representations such as sparse voxels, planes, hash tables, etc. [16], [5], [24], [9], they focus mostly on efficiency and do not explore the scene editing and manipulation capabilities of hybrid approaches. With the aim of exploring controllable scene representations for novel view synthesis, our model couples learnt scene-specific 3D feature volumes with a general NeRF rendering network. We can generalize to novel scenes by optimizing only the scene-specific 3D feature volume, while keeping the parameters of the rendering network fixed. Since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate scene manipulations including scene mixing; applying rigid and non-rigid transformations; and inserting, moving, and deleting objects in a scene, while producing photo-realistic novel-view synthesis results.
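A toy illustration of the design split described above follows (hypothetical shapes and names, not the authors' code): per-scene feature volumes are the only thing optimized or edited, while one shared rendering network stays fixed; "inserting an object" then amounts to copying a region of features from one scene's volume into another's.

    import numpy as np

    shared_renderer_weights = np.random.randn(32, 4)      # frozen across scenes

    def render_point(volume, ijk):
        feat = volume[tuple(ijk)]                         # look up a 32-dim feature
        rgba = feat @ shared_renderer_weights             # tiny stand-in "rendering network"
        return rgba

    scene_a = np.random.randn(64, 64, 64, 32)             # learnable, scene-specific
    scene_b = np.random.randn(64, 64, 64, 32)

    # "Object insertion": copy a block of features from scene A into scene B.
    scene_b[10:20, 10:20, 10:20] = scene_a[30:40, 30:40, 30:40]
    print(render_point(scene_b, (12, 12, 12)).shape)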

  • Book Chapter
  • 10.3233/faia251221
JOIG: Joint Optimization Model of Image Features and Constraint Geometry Fusion for Generalizable Gaussian
  • Oct 21, 2025
  • Yang Shuo + 3 more

Novel View Synthesis (NVS) seeks to generate realistic novel views from limited source images, offering an effective solution for 3D reconstruction in complex or unknown environments. Achieving high generalization under occlusion, varying illumination, and sparse observations remains challenging, largely hinging on the effective extraction, optimization, and fusion of image features and spatial geometry. In this work, we propose JOIG — a Joint Optimization Model of Image Features and Constraint Geometry Fusion for generalizable 3D Gaussian splatting. JOIG introduces three key components: Multiscale Dimension Rotation Fusion (MDRF) to capture intrinsic dependencies across feature dimensions for enhanced image encoding, Geometry Self-Correcting Aggregation (GSCA) to refine multi-view geometry with depth-guided reweighting, and Geometry-Image Feature Aggregation (GIFA) to achieve pixel-aligned fusion of spatial and image information. Extensive experiments on DTU, LLFF, NeRF Synthetic, and Tanks and Temples datasets demonstrate that JOIG achieves state-of-the-art generalization performance, significantly improving both quantitative metrics and visual fidelity in novel view synthesis.

  • Research Article
  • 10.1111/cgf.70148
Self‐Calibrating Fisheye Lens Aberrations for Novel View Synthesis
  • Apr 8, 2025
  • Computer Graphics Forum
  • Jinhui Xiang + 4 more

Neural rendering techniques, such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D‐GS), have led to significant advancements in scene reconstruction and novel view synthesis (NVS). These methods assume the use of an ideal pinhole model, which is free from lens distortion and optical aberrations. However, fisheye lenses introduce unavoidable aberrations due to their wide‐angle design and complex manufacturing, leading to multi‐view inconsistencies that compromise scene reconstruction quality. In this paper, we propose an end‐to‐end framework that integrates a standard 3D reconstruction pipeline with our lens aberration model to simultaneously calibrate lens aberrations and reconstruct 3D scenes. By modelling the real imaging process and jointly optimising both tasks, our framework eliminates the impact of aberration‐induced inconsistencies on reconstruction. Additionally, we propose a curriculum learning approach that ensures stable optimisation and high‐quality reconstruction results, even in the presence of multiple aberrations. To address the limitations of existing benchmarks, we introduce AbeRec, a dataset composed of scenes captured with lenses exhibiting severe aberrations. Extensive experiments on both existing public datasets and our proposed dataset demonstrate that our method not only significantly outperforms previous state‐of‐the‐art methods on fisheye lenses with severe aberrations but also generalises well to scenes captured by non‐fisheye lenses. Code and datasets are available at https://github.com/CPREgroup/Calibrating‐Fisheye‐Lens‐Aberration‐for‐NVS.

  • Conference Article
  • Citations: 11
  • 10.1109/cvpr46437.2021.00955
Self-Supervised Visibility Learning for Novel View Synthesis
  • Jun 1, 2021
  • Yujiao Shi + 2 more

We address the problem of novel view synthesis (NVS) from a few sparse source view images. Conventional image-based rendering methods estimate scene geometry and synthesize novel views in two separate steps. However, erroneous geometry estimation will decrease NVS performance as view synthesis highly depends on the quality of estimated scene geometry. In this paper, we propose an end-to-end NVS framework to eliminate the error propagation issue. To be specific, we construct a volume under the target view and design a source-view visibility estimation (SVE) module to determine the visibility of the target-view voxels in each source view. Next, we aggregate the visibility of all source views to achieve a consensus volume. Each voxel in the consensus volume indicates a surface existence probability. Then, we present a soft ray-casting (SRC) mechanism to find the most front surface in the target view (i.e., depth). Specifically, our SRC traverses the consensus volume along viewing rays and then estimates a depth probability distribution. We then warp and aggregate source view pixels to synthesize a novel view based on the estimated source-view visibility and target-view depth. At last, our network is trained in an end-to-end self-supervised fashion, thus significantly alleviating error accumulation in view synthesis. Experimental results demonstrate that our method generates novel views in higher quality compared to the state-of-the-art.
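The sketch below gives a hedged picture of the "soft ray-casting" step described above: walking front-to-back along a ray through per-bin surface probabilities and turning them into a depth probability distribution (the probability that each bin contains the first surface). Names and shapes are illustrative, not the paper's code.

    import numpy as np

    def soft_ray_cast(surface_prob, depths):
        """surface_prob: (D,) per-bin surface probability along one ray, front to back."""
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - surface_prob[:-1]]))
        depth_dist = trans * surface_prob            # probability of the first surface per bin
        depth_dist = depth_dist / (depth_dist.sum() + 1e-8)
        expected_depth = (depth_dist * depths).sum()
        return depth_dist, expected_depth

    p = np.array([0.05, 0.1, 0.6, 0.8, 0.3])
    d = np.linspace(1.0, 3.0, 5)
    print(soft_ray_cast(p, d))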

  • Conference Article
  • Citations: 1
  • 10.1117/12.2032815
3D reconstruction methods using line-scanning microscopy with a linear sensor
  • Jun 17, 2013
  • Milton P Macedo + 1 more

Line-scanning microscopy is a technique with the ability to deliver images at a higher acquisition rate than confocal microscopy. However, this is accomplished at the expense of degraded resolution for details parallel to the sensor if slit detectors are used. With a linear image sensor, it is possible to attenuate or even cancel this effect through the use of the information stored in each pixel, i.e., the light distribution across the line of sensor pixels. In spite of their great potential, the use of linear image sensors, and in particular the development of three-dimensional (3D) reconstruction methods that take their specificity into account, is scarce. This motivated us to build a laboratory prototype of a bench stage-scanning microscope using a linear image sensor. We aim to improve not only lateral resolution isotropy but also image visualization and 3D mesh reconstruction, using different optical setups, particularly illumination modes such as widefield and line illumination. The versatility of the laboratory prototype, namely its software for image acquisition, processing, and visualization, is important for attaining this goal, in the sense that it provides excellent means to develop and test algorithms. Several algorithms for 3D reconstruction were developed and are presented and discussed in this paper. Results of applying these 3D reconstruction methods show the improvements in lateral resolution isotropy and depth discrimination achieved using algorithms that integrate the sensor geometry or spatial sampling rate. The 3D mesh reconstructions also evidence the impact of an insufficient spatial sampling rate.

More from: IEEE transactions on visualization and computer graphics
  • New
  • Research Article
  • 10.1109/tvcg.2025.3628181
Untangling Rhetoric, Pathos, and Aesthetics in Data Visualization.
  • Nov 7, 2025
  • IEEE transactions on visualization and computer graphics
  • Verena Prantl + 2 more

  • New
  • Research Article
  • 10.1109/tvcg.2025.3616756
Selection at a Distance Through a Large Transparent Touch Screen.
  • Nov 1, 2025
  • IEEE transactions on visualization and computer graphics
  • Sebastian Rigling + 4 more

  • New
  • Research Article
  • 10.1109/tvcg.2025.3610275
IEEE ISMAR 2025 Introducing the Special Issue
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Han-Wei Shen + 2 more

  • New
  • Research Article
  • 10.1109/tvcg.2025.3610302
IEEE ISMAR 2025 Science & Technology Program Committee Members for Journal Papers
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • New
  • Research Article
  • 10.1109/tvcg.2025.3616749
HAT Swapping: Virtual Agents as Stand-Ins for Absent Human Instructors in Virtual Training.
  • Nov 1, 2025
  • IEEE transactions on visualization and computer graphics
  • Jingjing Zhang + 8 more

  • New
  • Research Article
  • 10.1109/tvcg.2025.3620888
IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE International Symposium on Mixed and Augmented Reality
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • New
  • Research Article
  • 10.1109/tvcg.2025.3610274
IEEE ISMAR 2025 Steering Committee Members
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • New
  • Research Article
  • 10.1109/tvcg.2025.3610303
IEEE ISMAR 2025 Paper Reviewers for Journal Papers
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • New
  • Research Article
  • 10.1109/tvcg.2025.3610261
Table of Contents
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • New
  • Research Article
  • 10.1109/tvcg.2025.3627644
A Utility-Aware Privacy-Preserving Method for Trajectory Publication.
  • Oct 31, 2025
  • IEEE transactions on visualization and computer graphics
  • Ziliang Wu + 7 more
