  • Open Access
  • Research Article
  • 10.1016/j.vrih.2024.08.006
Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models
  • Oct 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Robert Kosk + 5 more

Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems, and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, these models have not been compared across different mesh representations or evaluated jointly with point-wise distance and perceptual metrics.

Methods: We compare the influence of different mesh representation features, applied to various deep 3DMMs, on the spatial and perceptual fidelity of the reconstructed meshes. This paper confirms the hypothesis that building deep 3DMMs from meshes with global representations leads to lower spatial reconstruction error, measured with L1- and L2-norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives.

Results: The results presented in this paper provide guidance for selecting mesh representations when building deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
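The point-wise spatial fidelity metrics mentioned in the abstract (L1- and L2-norm reconstruction error) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and toy vertex data are assumptions, and real evaluations would average over a test set of registered meshes.

```python
import numpy as np

def reconstruction_errors(original, reconstructed):
    """Mean per-vertex L1 and L2 reconstruction errors between two meshes.

    Both inputs are (N, 3) arrays of corresponding vertex positions
    (meshes are assumed to be in dense correspondence).
    """
    diff = reconstructed - original
    l1 = np.abs(diff).sum(axis=1).mean()      # mean per-vertex L1 norm
    l2 = np.linalg.norm(diff, axis=1).mean()  # mean per-vertex L2 (Euclidean) norm
    return l1, l2

# Toy example: every vertex displaced by one unit along x.
orig = np.zeros((4, 3))
recon = orig + np.array([1.0, 0.0, 0.0])
print(reconstruction_errors(orig, recon))  # -> (1.0, 1.0)
```

Perceptual metrics such as FMPD and DAME, by contrast, compare surface properties (e.g. curvature) rather than raw vertex positions, which is why the two families of metrics can rank reconstructions differently.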

  • Open Access
  • Research Article
  • Citations: 2
  • 10.1016/j.vrih.2024.06.002
Co-salient object detection with iterative purification and predictive optimization
  • Oct 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Yang Wen + 4 more

  • Open Access
  • Research Article
  • 10.1016/j.vrih.2024.06.005
CURDIS: A template for incremental curve discretization algorithms and its application to conics
  • Oct 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Philippe Latour + 1 more

  • Open Access
  • Research Article
  • Citations: 2
  • 10.1016/j.vrih.2024.06.004
Music-stylized hierarchical dance synthesis with user control
  • Oct 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Yanbo Cheng + 2 more

  • Open Access
  • Research Article
  • 10.1016/j.vrih.2024.06.003
Pre-training transformer with dual-branch context content module for table detection in document images
  • Oct 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Yongzhi Li + 4 more

  • Open Access
  • Research Article
  • Citations: 16
  • 10.1016/j.vrih.2023.06.012
Robust blind image watermarking based on interest points
  • Aug 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Zizhuo Wang + 5 more

  • Open Access
  • Research Article
  • Citations: 26
  • 10.1016/j.vrih.2023.06.005
S2ANet: Combining local spectral and spatial point grouping for point cloud processing
  • Aug 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Yujie Liu + 3 more

  • Open Access
  • Research Article
  • Citations: 12
  • 10.1016/j.vrih.2023.06.010
Generating animatable 3D cartoon faces from single portraits
  • Aug 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Chuanyu Pan + 3 more

  • Open Access
  • Research Article
  • Citations: 11
  • 10.1016/j.vrih.2023.06.002
MKEAH: Multimodal knowledge extraction and accumulation based on hyperplane embedding for knowledge-based visual question answering
  • Aug 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Heng Zhang + 8 more

  • Open Access
  • Research Article
  • Citations: 23
  • 10.1016/j.vrih.2023.06.011
Multi-scale context-aware network for continuous sign language recognition
  • Aug 1, 2024
  • Virtual Reality & Intelligent Hardware
  • Senhua Xue + 3 more