Geometric image transformations are fundamental to image processing, computer vision and graphics, with critical applications to pattern recognition and facial identification. The splitting-integrating method (SIM) is well suited to the inverse transformation T^(-1) of digital images and patterns, but it encounters difficulties with the nonlinear solutions required for the forward transformation T. We propose improved techniques that entirely bypass nonlinear solutions for T, simplify the numerical algorithms, reduce computational costs and offer greater flexibility for general and complicated transformations T. In this paper, we apply the improved techniques to the harmonic, Poisson and blending models, which transform the original shapes of images and patterns into arbitrary target shapes. These models are, essentially, Dirichlet boundary value problems of elliptic equations, and we choose the simple finite difference method (FDM) to seek their approximate transformations. We focus on analyzing the errors of image greyness. Under the improved techniques, we derive the greyness errors of images under T and obtain the optimal convergence rates O(H^2) + O(H/N^2) for piecewise bilinear interpolations (μ = 1) and smooth images, where H (≪ 1) denotes the mesh resolution of an optical scanner and N is the division number by which a pixel is split into N^2 sub-pixels. Beyond smooth images, we address the practical challenges posed by discontinuous images and derive the error bounds O(H^β) + O(H^β/N^2), β ∈ (0, 1), for μ = 1. For piecewise continuous images with interior and exterior greyness jumps, we obtain O(H) + O(H/N^2). Compared with the error analysis in our previous study, where the image greyness is assumed to be smooth enough, this error analysis is significant for geometric image transformations. Hence, the improved algorithms, supported by rigorous error analysis of image greyness, may enhance their wide application in pattern recognition, facial identification and artificial intelligence (AI).
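To illustrate the splitting-integrating step described above, the following minimal sketch computes the greyness of a transformed image by splitting each target pixel into N^2 sub-pixels, mapping each sub-pixel centre back to the source image, sampling the source greyness by piecewise bilinear interpolation (μ = 1), and averaging. The function name sim_transform, the inv_map argument and the affine example mapping are illustrative assumptions only; in the paper the mapping comes from an FDM solution of the harmonic, Poisson or blending model, which is not reproduced here.

```python
import numpy as np

def sim_transform(src, inv_map, out_shape, N=4):
    """Sketch of the splitting-integrating method (SIM) with mu = 1.

    Each pixel of the transformed image is split into N*N sub-pixels; each
    sub-pixel centre is carried back to the source image by inv_map (a
    user-supplied callable standing in for the mapping obtained from an FDM
    solution of the harmonic/Poisson/blending model); the source greyness is
    sampled by piecewise bilinear interpolation and the samples are averaged.
    """
    rows, cols = out_shape
    h, w = src.shape
    out = np.zeros(out_shape)
    offs = (np.arange(N) + 0.5) / N  # sub-pixel centre offsets within a unit pixel

    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for di in offs:
                for dj in offs:
                    x, y = inv_map(i + di, j + dj)       # back to source coordinates
                    x = float(np.clip(x, 0.0, h - 1.0))  # keep the point inside the image
                    y = float(np.clip(y, 0.0, w - 1.0))
                    x0 = min(int(x), h - 2)
                    y0 = min(int(y), w - 2)
                    tx, ty = x - x0, y - y0
                    # Piecewise bilinear interpolation of the source greyness.
                    acc += ((1 - tx) * (1 - ty) * src[x0, y0]
                            + tx * (1 - ty) * src[x0 + 1, y0]
                            + (1 - tx) * ty * src[x0, y0 + 1]
                            + tx * ty * src[x0 + 1, y0 + 1])
            out[i, j] = acc / (N * N)  # integrate (average) over the N*N sub-pixels
    return out

# Illustrative usage with a hypothetical affine inverse mapping.
if __name__ == "__main__":
    src = np.random.rand(64, 64)
    shrink = lambda u, v: (0.8 * u + 3.0, 0.8 * v + 3.0)
    img = sim_transform(src, shrink, (64, 64), N=4)
    print(img.shape)
```

In this sketch, the parameter N plays the role of the division number in the error bounds O(H^2) + O(H/N^2): increasing N refines the sub-pixel integration of the greyness within each target pixel.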