Abstract
To synthesize ultra-large-scene, ultra-high-resolution video, high-quality video stitching and fusion are performed across a multi-scale unstructured camera array. This paper proposes a network-based image feature point extraction algorithm built on symmetric auto-encoding and scale feature fusion. Using the principle of symmetric auto-encoding, feature location information is restored hierarchically and fused into the feature map of the corresponding scale, while depthwise separable convolutions carry out the image feature extraction; this not only improves feature point detection performance but also significantly reduces the computational complexity of the network model. Based on the resulting high-precision feature point correspondences, a new image localization method is proposed that uses area ratios and homography-matrix scaling, which improves the speed and accuracy of scale alignment and positioning across the array camera images, enables high-definition perception of local detail in large scenes, and yields clearer large-scene synthesis and higher-quality stitched images. In experiments on the HPatches dataset, the proposed feature point extraction algorithm was compared with four typical algorithms: feature point detection performance improved by an average of 4.9%, homography estimation performance improved by an average of 2.5%, computation was reduced by 18%, and the number of network model parameters was reduced by 47%. The method also achieved billion-pixel video synthesis, demonstrating its practicality and robustness.
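As a rough illustration of the two architectural ideas highlighted above, symmetric encoder-decoder fusion of same-scale features and depthwise separable convolutions, the following PyTorch sketch shows how such a keypoint detection backbone could be assembled. The channel counts, layer depths, and the single-channel keypoint head are illustrative assumptions, not the paper's exact network.

```python
# Minimal sketch, assuming illustrative channel counts and layer depths
# (not the paper's exact architecture).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 (pointwise) conv, cutting parameters and FLOPs
    relative to a standard convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SymmetricAutoEncoder(nn.Module):
    """Symmetric encoder-decoder: each decoder stage upsamples and is fused
    with the encoder feature of the same scale, hierarchically restoring
    spatial (location) information for feature point detection."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        c1, c2, c3 = channels
        self.enc1 = DepthwiseSeparableConv(1,  c1)
        self.enc2 = DepthwiseSeparableConv(c1, c2, stride=2)
        self.enc3 = DepthwiseSeparableConv(c2, c3, stride=2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec2 = DepthwiseSeparableConv(c3 + c2, c2)
        self.dec1 = DepthwiseSeparableConv(c2 + c1, c1)
        self.head = nn.Conv2d(c1, 1, 1)  # per-pixel keypoint score map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        # Fuse each upsampled decoder feature with the same-scale encoder feature.
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # keypoint probability map

heatmap = SymmetricAutoEncoder()(torch.randn(1, 1, 256, 256))
```

Keypoints taken from such a heatmap would then be matched across overlapping array-camera views, and the matched pairs used to estimate the homographies on which the area-ratio scaling and alignment step operates.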