Abstract
With the rapid development of virtual reality technologies, image stitching is widely used in fields such as broadcasting, games, education, and architecture. Image stitching is a method for connecting multiple images to produce a single image with high resolution and a wide field of view. Most stitching methods find and match features across the input images. However, these methods cannot create a perfect 360-degree panoramic image because the depth of the projected area varies with the position and orientation of adjacent cameras. We therefore propose an advanced stitching method that corrects the misalignment caused by these depth differences using the pixel values of the input images after feature-based stitching. Once feature-based stitching has been performed, per-pixel motion in the overlapping areas is computed with an optical flow algorithm, and the overlap is then finely warped so that the images align correctly. Experiments confirm that the misalignment left by feature-based stitching is resolved, and a performance evaluation shows that the proposed optical-flow-based stitching method is fast enough for real-time service.
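As a rough illustration of the feature-based first stage mentioned in the abstract, the sketch below detects and matches features, estimates a homography, and warps one image onto the other. The specific choices (OpenCV's ORB detector, brute-force Hamming matching, a RANSAC threshold of 5.0) are assumptions for illustration only and are not taken from the paper.

```python
import cv2
import numpy as np

def feature_based_stitch(img_left, img_right):
    """Sketch of feature-based stitching: match features, estimate a
    homography, and warp the right image onto the left image's plane."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # Detect and describe keypoints (ORB is one common choice).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(gray_l, None)
    kp_r, des_r = orb.detectAndCompute(gray_r, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_r, des_l), key=lambda m: m.distance)[:200]

    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # A single RANSAC homography is exact only for planar or distant scenes,
    # which is why depth differences leave residual misalignment in the overlap.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[:h, :w] = img_left
    return canvas, H
```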
Highlights
With the rapid development of VR technology, the demand for improved image stitching techniques continues to grow.[1,2] VR services are now offered in many fields alongside the emergence of 5G, but VR has not spread as fast as expected because there is no killer content that motivates people to buy VR equipment.[3]
Image stitching is a method for matching multiple images to produce a single image with high resolution and a wide field of view, and various stitching algorithms have been studied over the last few years.[4,5,6,7,8,9,10,11,12,13]
The motion field obtained through an optical flow algorithm is defined per pixel, so by measuring how far each pixel has moved it is possible to determine whether corresponding pixels lie at the same position, as sketched below.
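A minimal sketch of obtaining such a dense, per-pixel motion field, assuming OpenCV's Farnebäck implementation (the algorithm choice and parameters are illustrative, not prescribed by the paper):

```python
import cv2

def dense_motion_field(overlap_a, overlap_b):
    """Return a per-pixel motion field (dx, dy) from overlap_a to overlap_b."""
    gray_a = cv2.cvtColor(overlap_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(overlap_b, cv2.COLOR_BGR2GRAY)

    # flow[y, x] = (dx, dy): pixel (x, y) of overlap_a appears at
    # (x + dx, y + dy) in overlap_b.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow  # shape (H, W, 2), float32
```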
Summary
With the rapid development of VR technology, the demand for improved image stitching techniques continues to grow.[1,2] VR services are now offered in many fields alongside the emergence of 5G, but VR has not spread as fast as expected because there is no killer content that motivates people to buy VR equipment.[3] The existing feature-based stitching method cannot produce an accurate and detailed 360-degree panorama because the depth of the projected area and the position of the center point differ between cameras according to the position and orientation of adjacent cameras. The moment the scene is projected from 3D to 2D, the depth (z-axis) information is lost. Because of this, if matching is performed based on the features in the original area, it becomes difficult to match the features in the square area; each feature cannot be matched exactly, so only an intermediate point at which the features are matched as well as possible is found. To solve this problem, in this paper we propose an improved stitching method based on an optical flow algorithm that matches each feature more accurately. Conclusions and future work are provided in the final section.
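To make the second stage of the summary concrete, the sketch below shows how a dense flow field could be used to finely warp the overlapping region so that corresponding pixels coincide before blending. The use of cv2.remap and the simple 50/50 blend are assumptions about how the correction described above might be realized, not the paper's exact implementation.

```python
import cv2
import numpy as np

def flow_correct_overlap(overlap_a, overlap_b, flow):
    """Warp overlap_b onto overlap_a using a per-pixel flow field, then blend.

    flow[y, x] = (dx, dy) maps pixel (x, y) of overlap_a to its match in
    overlap_b, e.g. as produced by cv2.calcOpticalFlowFarneback(a, b, ...).
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # For each pixel of overlap_a, sample overlap_b at the matched location.
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_b = cv2.remap(overlap_b, map_x, map_y, cv2.INTER_LINEAR)

    # Simple averaged blend of the now-aligned overlap; a seam-aware or
    # gradient-domain blend could be substituted here.
    return cv2.addWeighted(overlap_a, 0.5, warped_b, 0.5, 0)
```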