Abstract
Identifying license plate numbers in drone images is a difficult task for intelligent transportation systems, with practical applications such as parking management, traffic management, and automatically organizing parking spots. The primary goal of the presented work is to demonstrate how to extract robust, invariant features from PCM that can withstand the difficulties posed by drone images. The work then takes advantage of a fully connected neural network to fix precise bounding boxes regardless of orientation, shape, and text size. The proposed method detects text in both license plate images and natural scene images, which leads to a better recognition stage. The effectiveness of the work is assessed on both our drone dataset (Mimos) and the benchmark license plate dataset (Medialab). To show that the proposed system can detect natural scene text in a wide variety of situations, experimental results are reported on four benchmark datasets, namely SVT, MSRA-TD-500, ICDAR 2017 MLT, and Total Text. We also describe experiments that demonstrate robustness to varying heights, distances, and angles. The code and data for this work will be made publicly available on GitHub.