Abstract

Stitched images offer a wider field of view, but their boundaries are often irregular and visually unappealing. Existing image rectangling methods address this by repeatedly warping local grids to produce images with regular rectangular boundaries; however, this repeated warping can distort content and lose boundary information. We propose an image rectangling solution based on a reparameterized Transformer structure that performs a single warping step. In addition, we design an assisted learning network to aid the image rectangling network during training. To improve the network's parallel efficiency, we introduce a local thin-plate spline transform strategy that achieves efficient local deformation. The proposed method achieves state-of-the-art performance on stitched image rectangling with few parameters while maintaining high content fidelity. The code is available at https://github.com/MelodYanglc/TransRectangling.
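To make the "local thin-plate spline transform" idea concrete, below is a minimal sketch of a standard 2D thin-plate spline (TPS) warp of the kind such a strategy could build on. It is an illustration only, not the paper's implementation: the function names, control-point values, and grid sizes are assumptions, and the actual network-side details (mesh prediction, local partitioning, reparameterization) are not reproduced here.

```python
# Minimal thin-plate spline (TPS) warp sketch (NumPy only). All names and values
# here are hypothetical illustrations, not the TransRectangling implementation.
import numpy as np

def tps_kernel(r2):
    """Radial basis U(r) = r^2 * log(r^2); clamped so U(0) evaluates to ~0."""
    r2 = np.maximum(r2, 1e-12)
    return r2 * np.log(r2)

def fit_tps(src_pts, dst_pts):
    """Solve the standard TPS linear system mapping src control points to dst points.

    src_pts, dst_pts: (N, 2) arrays of control-point coordinates.
    Returns an (N+3, 2) parameter matrix [w; a] for the two output coordinates.
    """
    n = src_pts.shape[0]
    d2 = np.sum((src_pts[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)                                    # (N, N) radial terms
    P = np.hstack([np.ones((n, 1)), src_pts])             # (N, 3) affine terms
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst_pts
    return np.linalg.solve(A, b)

def apply_tps(params, src_pts, query_pts):
    """Warp arbitrary query points with fitted TPS parameters."""
    d2 = np.sum((query_pts[:, None, :] - src_pts[None, :, :]) ** 2, axis=-1)
    U = tps_kernel(d2)                                                 # (M, N)
    P = np.hstack([np.ones((query_pts.shape[0], 1)), query_pts])       # (M, 3)
    return np.hstack([U, P]) @ params                                  # (M, 2)

# Example: deform a dense sampling grid from a few (hypothetical) control points.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src + np.array([[0, 0], [0, 0], [0, 0], [0, 0], [0.1, -0.05]])
params = fit_tps(src, dst)
ys, xs = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1)
warped = apply_tps(params, src, grid)   # coordinates at which to resample the image
```

Restricting such a fit to control points within a local grid cell, rather than the whole image, is one plausible way a "local" TPS transform keeps each linear solve small and lets cells be processed in parallel.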
