  • Research Article
  • 10.1111/cgf.70309
GauComp: 3D Gaussian Completion for Associated Shadow and Object Removal
  • Feb 4, 2026
  • Computer Graphics Forum
  • Wenxing Zheng + 4 more

Abstract Recent advancements in scene editing based on 3D Gaussian splatting techniques have achieved significant progress. However, existing scene editing methods still face critical limitations. One notable drawback is the inability to achieve cascading effect propagation—i.e., the automatic and consistent updating of associated elements such as shadows in response to object manipulations. Furthermore, most current methods perform object editing directly on 2D images, leading to inconsistencies across inputs from multiple viewpoints. In this paper, we present a novel pipeline named GauComp, addressing the task of object removal in 3D Gaussian‐based scenes, with automatic removal of associated shadows. The proposed method comprises three core stages: Shadow‐aware 3D Segmentation, Gaussian Completion, and Gaussian Refine. The Shadow‐aware 3D Segmentation stage establishes bidirectional associations between objects and shadows, enabling robust 3D scene segmentation by integrating 3D semantic modelling with instance‐level shadow detection. The Gaussian Completion stage leverages Normal‐aware PatchMatch to search for similar Gaussian primitives from the same scene for 3D Gaussian completion to ensure geometric consistency during scene editing. Finally, the Gaussian Refine stage improves the quality of scene restoration by constraining the Gaussian optimization process, effectively suppressing errors such as false cloning and incorrect splitting. Experimental results demonstrate that GauComp significantly enhances restoration quality, completing scene editing and restoration within 54 seconds, thereby providing an efficient and high‐fidelity solution for 3D scene editing and restoration.
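The normal-aware search the abstract describes can be illustrated in miniature. The sketch below is not the paper's PatchMatch pipeline; it is a brute-force NumPy stand-in in which each "hole" sample picks the source primitive minimising a position cost plus a normal-disagreement penalty, then copies its attributes (all array names are invented for this toy):

```python
import numpy as np

def normalize(v):
    return v / np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-12)

def complete_by_normal_match(hole_pos, hole_nrm, src_pos, src_nrm, src_attr, w_n=1.0):
    # Score every (hole, source) pair by Euclidean distance plus a
    # normal-disagreement penalty, then copy the best source's attributes.
    hole_nrm, src_nrm = normalize(hole_nrm), normalize(src_nrm)
    d_pos = np.linalg.norm(hole_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    d_nrm = 1.0 - hole_nrm @ src_nrm.T          # 0 when normals agree exactly
    best = np.argmin(d_pos + w_n * d_nrm, axis=1)
    return src_attr[best], best

# Floor primitives (normals +z, attribute 0) and wall primitives (+x, attribute 1).
floor_pos = np.array([[i, j, 0.0] for i in range(4) for j in range(4)])
wall_pos = np.array([[0.0, j, k + 5.0] for j in range(4) for k in range(4)])
src_pos = np.vstack([floor_pos, wall_pos])
src_nrm = np.vstack([np.tile([0, 0, 1.0], (16, 1)), np.tile([1.0, 0, 0], (16, 1))])
src_attr = np.array([0] * 16 + [1] * 16)

# A hole in the floor: samples with floor-like normals match floor sources.
hole_pos = np.array([[1.5, 1.5, 0.0], [2.5, 0.5, 0.0]])
hole_nrm = np.tile([0, 0, 1.0], (2, 1))
filled, idx = complete_by_normal_match(hole_pos, hole_nrm, src_pos, src_nrm, src_attr)
```

The normal term keeps the hole from being filled with wall primitives even when they are geometrically nearby, which is the intuition behind making the search normal-aware.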

  • Open Access
  • Research Article
  • 10.1111/cgf.70296
SDFs from Unoriented Point Clouds using Neural Variational Heat Distances
  • Jan 6, 2026
  • Computer Graphics Forum
  • Samuel Weidemaier + 5 more

Abstract We propose a novel variational approach for computing neural Signed Distance Fields (SDF) from unoriented point clouds. To this end, we replace the commonly used eikonal equation with the heat method, carrying over to the neural domain what has long been standard practice for computing distances on discrete surfaces. This yields two convex optimisation problems for whose solution we employ neural networks: We first compute a neural approximation of the gradients of the unsigned distance field through a small time step of heat flow with weighted point cloud densities as initial data. Then, we use it to compute a neural approximation of the SDF. We prove that the underlying variational problems are well‐posed. Through numerical experiments, we demonstrate that our method provides state‐of‐the‐art surface reconstruction and consistent SDF gradients. Furthermore, we show in a proof‐of‐concept that it is accurate enough for solving a PDE on the zero‐level set.
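The three-step structure the abstract carries over from the classic heat method can be shown in miniature. This is not the authors' neural formulation; it is a finite-difference 1D toy: diffuse heat from a point source for a short time, normalise the negated gradient (in 1D this reduces to taking its sign), then integrate that unit field to recover distance:

```python
import numpy as np

def heat_distance_1d(n=201, src=0.3, t_steps=200):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    dt = 0.4 * h * h                      # stable explicit step for the heat equation
    # Step 1: short-time heat flow from a point source.
    u = np.zeros(n)
    u[np.argmin(np.abs(x - src))] = 1.0
    for _ in range(t_steps):
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
        u = u + dt * lap
    # Step 2: unit direction field from the negated heat gradient.
    X = -np.sign(np.gradient(u, h))
    # Step 3: integrate the direction field (the 1D Poisson solve) and shift
    # so the minimum sits at zero.
    phi = np.concatenate([[0.0], np.cumsum(0.5 * (X[1:] + X[:-1]) * h)])
    return x, phi - phi.min()

x, d = heat_distance_1d()
```

On this grid `d` approximates `|x - 0.3|`; the same diffuse-normalise-integrate structure is what the paper lifts into the neural, unsigned-to-signed setting.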

  • Research Article
  • 10.1111/cgf.70298
Improving the Watertightness of Parametric Surface/Surface Intersection
  • Dec 27, 2025
  • Computer Graphics Forum
  • Yuqing Wang + 5 more

Abstract The parametric surface/surface intersection (SSI) computation serves as a fundamental component in geometric modelling kernels for computer‐aided design (CAD) systems. The geometric fidelity of intersection curves—particularly whether the computed intersection loci in the two parametric domains (p‐curves) under the two surface maps agree with the true intersection curve in the modelling space—determines the watertightness of the surface trimming. Despite abundant research and industrial developments for SSI algorithms, ensuring the watertightness of the intersection remains challenging, which directly impacts the stability and reliability of the modelling systems. In this paper, we present a practical algorithm for computing parametric SSI with gap control between the maps in the modelling space of the two p‐curves. We first analyse the topology of the two p‐curves by solving lower‐dimensional systems of equations and build a graph in each domain representing the topology. Then we refine the graphs through adaptive edge subdivision and construct initial approximations of p‐curves by interpolation. A constrained optimization framework incorporating distance and tangential information is employed to improve accuracy and minimize gaps. We demonstrate the effectiveness of our algorithm through extensive experiments and comparisons with the intersection package in the open source software OCCT and the commercial engine ACIS.
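The kind of gap-minimising refinement the abstract mentions can be sketched on a single point. This is not the paper's algorithm; it is a minimal Gauss-Newton iteration on two toy parametric surfaces (a unit sphere and the plane z = 0.5, whose true intersection is the circle x² + y² = 0.75), driving the modelling-space gap between the two maps toward zero:

```python
import numpy as np

def S1(u, v):  # unit sphere
    return np.array([np.cos(u) * np.cos(v), np.sin(u) * np.cos(v), np.sin(v)])

def S2(s, t):  # plane z = 0.5
    return np.array([s, t, 0.5])

def refine_intersection(p0, iters=20, eps=1e-6):
    # Gauss-Newton on the residual S1(u, v) - S2(s, t) with a finite-difference
    # Jacobian; lstsq returns the minimum-norm step for the 3x4 system.
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = S1(p[0], p[1]) - S2(p[2], p[3])
        if np.linalg.norm(r) < 1e-12:
            break
        J = np.empty((3, 4))
        for i in range(4):
            q = p.copy()
            q[i] += eps
            J[:, i] = (S1(q[0], q[1]) - S2(q[2], q[3]) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

p = refine_intersection([0.2, 0.4, 0.1, 0.1])
gap = np.linalg.norm(S1(p[0], p[1]) - S2(p[2], p[3]))
```

The paper's optimisation additionally constrains tangential behaviour along whole p‐curves; this sketch only shows the per-point distance term.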

  • Open Access
  • Research Article
  • 10.1111/cgf.70287
NePO: Neural Point Octrees for Large‐Scale Novel View Synthesis
  • Dec 24, 2025
  • Computer Graphics Forum
  • Noah Lewis + 3 more

Abstract Point‐based radiance field rendering produces impressive results for novel‐view synthesis tasks. Established methods work with object‐centric datasets or room‐sized scenes, as computational resources and model capabilities are limited. To overcome this limitation, we introduce neural point octrees (NePOs) to radiance field rendering, which enables optimisation and rendering of large‐scale datasets at varying detail levels, including different acquisition modalities, such as camera drones and LiDAR vehicles. Our method organises input point clouds into an octree from the bottom up, enabling level of detail (LOD) selection during rendering. Appearance descriptors for each point are optimised using the RGB captures, enabling our system to self‐refine and address real‐world challenges such as capture coverage discrepancies and SLAM pose drift. The refinement is achieved by adaptively densifying octree nodes during training and optimising camera poses via gradient descent. Overall, our approach efficiently optimises scenes with thousands of images and renders scenes containing hundreds of millions of points in real time.
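The octree-with-LOD idea in the abstract can be sketched with a plain point octree and a screen-space-error style cut: descend until a node's projected size drops below a threshold, then emit one proxy point per node. This is an illustrative structure, not the paper's bottom-up construction or neural descriptors:

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, half, depth):
        self.center, self.half, self.depth = center, half, depth
        self.points = []        # points stored at leaves
        self.children = None    # eight children once split

    def insert(self, p, max_depth=5, max_points=8):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > max_points and self.depth < max_depth:
                self._split(max_depth, max_points)
        else:
            self._child_for(p).insert(p, max_depth, max_points)

    def _split(self, max_depth, max_points):
        self.children = []
        for dx in (-1, 1):
            for dy in (-1, 1):
                for dz in (-1, 1):
                    c = self.center + 0.5 * self.half * np.array([dx, dy, dz])
                    self.children.append(OctreeNode(c, self.half / 2, self.depth + 1))
        pts, self.points = self.points, []
        for q in pts:
            self._child_for(q).insert(q, max_depth, max_points)

    def _child_for(self, p):
        i = (p[0] > self.center[0]) * 4 + (p[1] > self.center[1]) * 2 + (p[2] > self.center[2])
        return self.children[i]

def collect_lod(node, cam, tau):
    # Stop descending once the node's apparent size falls below tau and
    # return its centre as a proxy; otherwise recurse (or return leaf points).
    if node.children is None:
        return list(node.points)
    dist = max(np.linalg.norm(node.center - cam), 1e-9)
    if 2 * node.half / dist < tau:
        return [node.center]
    return [p for c in node.children for p in collect_lod(c, cam, tau)]
```

With `tau = 0` the cut returns every input point; with a distant camera and a coarse `tau` it collapses the whole scene to a single proxy, which is the LOD trade-off the method exploits at scale.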

  • Open Access
  • Research Article
  • 10.1111/cgf.70301
EvolvED: Evolutionary Embeddings to Understand the Generation Process of Diffusion Models
  • Dec 24, 2025
  • Computer Graphics Forum
  • Vidya Prasad + 5 more

Abstract Diffusion models, widely used in image generation, rely on iterative refinement to produce images from noise. Understanding this data evolution supports model development and interpretability, yet is challenging due to its high‐dimensional, iterative nature. Prior works often focus on static or instance analyses, missing the iterative and holistic aspects of the generative space. While dimensionality reduction can visualise image evolution for some instances, it does not preserve the iterative structure. To address these gaps, we introduce EvolvED, a method that presents a holistic view of the iterative generative process in diffusion models. EvolvED goes beyond instance analysis, leveraging predefined analysis goals to streamline generative space exploration. User‐defined prompts aligned with these goals extract intermediate images, preserving the iterative context. Relevant feature extractors are used to trace the evolution of key image attributes, addressing the complexity of high‐dimensional outputs. Central to EvolvED is a novel evolutionary embedding algorithm, explicitly encoding iterations while preserving semantic and evolutionary relations of these encoded representations. It clusters semantically similar elements via a t‐SNE loss per iteration, introduces a displacement loss to represent iterations in distinct predefined spatial regions, and an alignment loss for continuity across iterations. This embedding is presented as rectilinear and radial layouts. We apply EvolvED to models like GLIDE and Stable Diffusion, demonstrating its ability to provide valuable insights into the generative process.
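The interplay of the three losses can be shown on toy data. The sketch below is not EvolvED itself: it keeps the displacement loss (each iteration pulled to its own x band) and the alignment loss (each instance's y kept continuous across iterations) from the abstract, but replaces the per-iteration t-SNE term with a simple pull toward the class centroid:

```python
import numpy as np

def evolutionary_embedding(labels, T, spacing=5.0, iters=500, lr=0.1,
                           w_disp=1.0, w_align=1.0, w_sem=0.5, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    N = len(labels)
    Y = rng.normal(size=(T, N, 2))                 # 2D position per instance, per iteration
    target_x = spacing * np.arange(T, dtype=float)[:, None]
    same = (labels[:, None] == labels[None, :]).astype(float)
    same /= same.sum(axis=1, keepdims=True)        # row-stochastic class averaging
    for _ in range(iters):
        g = np.zeros_like(Y)
        # Displacement loss: iteration t occupies the band x = t * spacing.
        g[:, :, 0] += 2 * w_disp * (Y[:, :, 0] - target_x)
        # Alignment loss: consecutive iterations of one instance stay close in y.
        diff = Y[1:, :, 1] - Y[:-1, :, 1]
        g[1:, :, 1] += 2 * w_align * diff
        g[:-1, :, 1] -= 2 * w_align * diff
        # Toy semantic term: pull each point toward its class centroid in y.
        means = Y[:, :, 1] @ same.T
        g[:, :, 1] += 2 * w_sem * (Y[:, :, 1] - means)
        Y -= lr * g
    return Y

Y = evolutionary_embedding(np.array([0, 0, 1, 1]), T=4)
```

After optimisation the iterations sit in distinct x bands while each instance traces a continuous path in y, i.e. the rectilinear layout the paper describes.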

  • Research Article
  • 10.1111/cgf.70299
Preserving Photographic Defocus in Stylised Image Synthesis
  • Dec 10, 2025
  • Computer Graphics Forum
  • Hong‐Yi Wang + 1 more

Abstract While style transfer has been extensively studied, most existing approaches fail to account for the defocus effects inherent in content images, thereby compromising the photographer's intended focus cues. To overcome this shortcoming, we introduce an optimisation‐based post‐processing framework that restores defocus characteristics to stylised images, regardless of the style transfer technique used. Our method initiates by estimating a blur map through a data‐driven model that predicts pixel‐level blur magnitudes. This blur map subsequently guides a layer‐based defocus rendering framework, which effectively simulates depth‐of‐field (DoF) effects using a Gaussian filter bank. To map the blur values to appropriate kernel sizes in the filter bank, we introduce a neural network that determines the optimal maximum filter size, ensuring both content integrity and stylistic fidelity. Experimental results, both quantitative and qualitative, show that our method significantly improves stylised images by preserving the original depth cues and defocus details.
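The layer-based rendering step the abstract outlines can be sketched directly: blur the image once per sigma in a Gaussian filter bank, then blend the two layers that bracket each pixel's blur-map value. This omits the paper's learned blur estimation and kernel-size network; sigma values here are arbitrary:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur with edge padding (one layer of the bank).
    if sigma <= 0:
        return img.copy()
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    out = np.pad(img, pad, mode="edge")
    out = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 1, out)
    return out

def render_defocus(img, blur_map, sigmas=(0.0, 1.0, 2.0, 4.0)):
    # Blur once per sigma, then per pixel blend the two layers whose
    # sigmas bracket the blur-map value.
    bank = np.asarray(sigmas, dtype=float)
    layers = np.stack([blur(img, s) for s in bank])
    s = np.clip(blur_map, bank[0], bank[-1])
    idx = np.clip(np.searchsorted(bank, s, side="right") - 1, 0, len(bank) - 2)
    lo, hi = bank[idx], bank[idx + 1]
    w = (s - lo) / (hi - lo)
    rows, cols = np.indices(img.shape)
    return (1 - w) * layers[idx, rows, cols] + w * layers[idx + 1, rows, cols]
```

A zero blur map returns the stylised image untouched, so in-focus regions keep their detail while out-of-focus regions inherit the photographer's defocus.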

  • Research Article
  • 10.1111/cgf.70297
Robust Differentiable Sketch Rendering for Single‐View 3D Reconstruction
  • Dec 4, 2025
  • Computer Graphics Forum
  • Aobo Jin + 3 more

Abstract In this paper, we propose a novel end‐to‐end method to model 3D objects with geometric details from a single‐view sketch input. Specifically, a novel deep learning‐based differentiable sketch renderer is introduced to establish the relationship between geometric features, represented by normal maps, and 2D sketch strokes. Then, building upon this renderer, we design algorithms to automatically create 3D models with geometric details from a single‐view sketch. With the aid of two introduced loss functions: one based on silhouette‐derived confidence maps and the other on regression similarities, our framework supports the gradient of loss functions calculated between the rendered sketch and input sketch back‐propagating through the whole architecture, thereby enhancing the geometric details on the frontal surface of the generated 3D object. Through comparisons with state‐of‐the‐art sketch‐based 3D modelling techniques, our approach demonstrates superior capability in generating plausible geometric shapes and details, without the necessity for semantic annotations within the input sketch.
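The core requirement of the renderer, strokes derived from normal-map discontinuities through operations that admit gradients, can be illustrated with a tiny non-neural stand-in: finite differences of the normal map feed a sigmoid instead of a hard edge threshold, so the stroke mask stays differentiable with respect to the normals:

```python
import numpy as np

def soft_sketch(normals, tau=0.3, sharpness=20.0):
    # Stroke intensity from normal variation between neighbouring pixels
    # (forward differences), softened by a sigmoid so gradients can flow.
    h, w = normals.shape[:2]
    dx = np.zeros((h, w))
    dy = np.zeros((h, w))
    dx[:, :-1] = np.linalg.norm(normals[:, 1:] - normals[:, :-1], axis=-1)
    dy[:-1, :] = np.linalg.norm(normals[1:] - normals[:-1], axis=-1)
    edge = np.sqrt(dx ** 2 + dy ** 2)
    return 1.0 / (1.0 + np.exp(-sharpness * (edge - tau)))

# Two flat regions meeting in a crease: strokes appear only at the boundary.
normals = np.zeros((16, 16, 3))
normals[:, :8] = (0.0, 0.0, 1.0)
normals[:, 8:] = (1.0, 0.0, 0.0)
out = soft_sketch(normals)
```

The paper's renderer is a learned model, but the same property, a smooth map from normals to strokes, is what lets reconstruction losses back-propagate through the whole architecture.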

  • Research Article
  • 10.1111/cgf.70294
Adaptive Use of LBO Bases by Shape Feature Scales for High‐Quality and Efficient Shape Correspondence
  • Nov 30, 2025
  • Computer Graphics Forum
  • Chong Zhao + 2 more

Abstract Bases from the eigenfunctions of the Laplace–Beltrami operator (LBO), called LBO bases, are popularly used to construct functional mappings for shape correspondence. Although many efforts have been made to improve LBO basis construction and their application in shape correspondence, they often overlook the role of shape feature scales in determining the suitability of LBO bases. This mismatch between the selected LBO bases and shape features results in poor representation, hindering shape correspondence and requiring more iterations for convergence, ultimately reducing efficiency. In this paper, we present an attention‐based module that adaptively learns weights based on the scales of shape features to better utilise LBO bases. This ensures that the selected LBO bases have frequencies that align with the scales of shape features, addressing the mismatch problem and improving feature representation. By filtering out LBO bases with incompatible frequencies, our approach enhances shape correspondence while reducing the number of iterations required for convergence, thereby improving efficiency. Additionally, the selected LBO bases can be easily integrated with existing methods, such as the test‐time adaptation strategy, to further enhance shape correspondence. Experimental results demonstrate that our method achieves higher‐quality results than state‐of‐the‐art methods while maintaining high efficiency.
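The scale-to-frequency matching can be made concrete on a toy "shape": a cycle graph whose Laplacian stands in for a mesh LBO. The sketch below replaces the paper's learned attention with a fixed soft gate over eigenvalues; the scale-to-eigenvalue mapping `1 / scale**2` is an assumption made only for this illustration:

```python
import numpy as np

def ring_laplacian(n):
    # Graph Laplacian of an n-cycle: a stand-in for a mesh LBO.
    L = 2.0 * np.eye(n)
    idx = np.arange(n)
    L[idx, (idx + 1) % n] = -1.0
    L[idx, (idx - 1) % n] = -1.0
    return L

def scale_adaptive_filter(f, evals, evecs, scale, temp=0.1):
    # Soft gate in [0, 1] per basis: high where the eigenvalue (frequency)
    # matches the target implied by the feature scale.
    target = 1.0 / scale ** 2          # heuristic scale-to-eigenvalue mapping (assumption)
    gate = np.exp(-np.abs(evals - target) / temp)
    coeffs = evecs.T @ f               # spectral coefficients in the LBO basis
    return evecs @ (gate * coeffs)

n = 64
evals, evecs = np.linalg.eigh(ring_laplacian(n))
x = 2 * np.pi * np.arange(n) / n
low, high = np.sin(x), np.sin(8 * x)
g = scale_adaptive_filter(low + high, evals, evecs, scale=10.0)
```

With a large feature scale the gate keeps the low-frequency bases and suppresses the incompatible high-frequency ones, which is the filtering behaviour the module is designed to learn.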

  • Research Article
  • 10.1111/cgf.70295
Digitisation of Impasto and Gloss in Oil Paintings via Spatially Varying Bidirectional Reflectance Distribution Function Acquisition
  • Nov 29, 2025
  • Computer Graphics Forum
  • Chih Yang + 1 more

Abstract The growth of information technology and the Internet has increased the demand for online art exhibitions. As the digitisation of artworks often requires highly customised equipment and techniques, this study proposes a practical method for obtaining spatially varying bidirectional reflectance distribution function parameters for oil paintings with rich impasto and varying gloss. We combined the photometric stereo algorithm with a deep learning model, which was trained based on real oil painting samples. The proposed method surpasses current inverse rendering and pure deep learning methods that are limited to specific materials or synthetic data. Our system effectively reproduced the nonhomogeneous nature of oil paintings by capturing normal vectors, albedo, roughness, and specular intensity for each pixel. This approach provides a practical solution for digitising oil paintings, enabling the reproduction of impastos and glossy appearances in virtual environments.
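The photometric stereo component the abstract builds on has a classical closed form that is easy to sketch: under the Lambertian model, per-pixel least squares on images taken under known light directions recovers an albedo-scaled normal. This covers only the normal/albedo part of the paper's SVBRDF capture, not the learned roughness and specular terms:

```python
import numpy as np

def photometric_stereo(images, lights):
    # images: (K, H, W) intensities; lights: (K, 3) unit light directions.
    # Per-pixel least squares solves lights @ G = I for G = albedo * normal.
    K, H, W = images.shape
    I = images.reshape(K, -1)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```

On synthetic Lambertian data with at least three non-coplanar lights the recovery is exact, which is why this stage supplies reliable normal and albedo maps for the subsequent learned refinement.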

  • Open Access
  • Research Article
  • 10.1111/cgf.70293
FluidMap: Proportional and Spatially Consistent Layout Enrichments in Multidimensional Projections
  • Nov 26, 2025
  • Computer Graphics Forum
  • Daniela Blumberg + 5 more

Abstract Layout enrichment methods for multidimensional projections aim to enhance 2D scatterplots with additional information. We address the representation of categorical attributes with respect to a numerical feature, like the importance of a data point or its frequency, by colouring the scatterplots' background. However, applying existing space‐filling methods has limitations: Voronoi partitionings, including weighted variants, do not correctly account for the relative weight of data points, resulting in disproportionately small or large areas, depending on the data point density. Neighbourhood Treemaps (Nmap) preserve the relative size of areas given data point weights but are restricted to rectangular shapes, often positioned far from the associated data points. To address these issues, we propose FluidMap , a space‐filling layout enrichment inspired by fluid dynamics. Our algorithm simulates the behaviour of coloured fluids spreading under pressure, with projected data points serving as sources and weights determining the amount of fluid to be distributed. FluidMap generates flexibly shaped areas that maintain sizes proportional to data point weights and include their assigned data points. We compare our method to Voronoi‐based techniques and Nmap by quantifying their visual properties. Additionally, through an expert study, we assess task‐specific differences. Our method outperforms existing techniques in preserving proportional representation and spatial consistency simultaneously.
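The weight-proportional, seed-connected regions the abstract asks for can be imitated without a fluid solver. The sketch below is a budgeted multi-source BFS, not the paper's pressure simulation: each seed may claim a number of grid cells proportional to its weight, and the round-robin expansion keeps every region connected to its seed:

```python
import numpy as np
from collections import deque

def weighted_spread(shape, seeds, weights):
    # Each seed gets a cell budget proportional to its weight, then all
    # seeds expand one BFS ring per round until their budgets run out.
    H, W = shape
    wsum = sum(weights)
    budget = [int(round(w / wsum * H * W)) for w in weights]
    owner = -np.ones(shape, dtype=int)
    frontiers = []
    for i, (r, c) in enumerate(seeds):
        owner[r, c] = i
        budget[i] -= 1
        frontiers.append(deque([(r, c)]))
    active = True
    while active:
        active = False
        for i, q in enumerate(frontiers):
            for _ in range(len(q)):            # one ring of this seed's frontier
                if budget[i] <= 0:
                    q.clear()
                    break
                r, c = q.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < H and 0 <= cc < W and owner[rr, cc] < 0 and budget[i] > 0:
                        owner[rr, cc] = i
                        budget[i] -= 1
                        q.append((rr, cc))
            if q:
                active = True
    return owner

# Two seeds with a 3:1 weight ratio on a 20x20 grid.
owner = weighted_spread((20, 20), [(2, 2), (17, 17)], [3.0, 1.0])
```

Unlike unweighted Voronoi partitions, the claimed areas track the weights (here 300 vs. 100 cells) while each region stays attached to its data point, the two properties FluidMap is built to satisfy simultaneously.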