Abstract

Context. Target recognition is a priority task in military affairs. The task is complicated by the need to recognize moving objects, while varied terrain and landscape create obstacles to recognition. Combat actions can take place at any time of day, so the direction of illumination and the overall lighting conditions must be taken into account. The object must be detected in the video by segmenting the video frames, then recognized and classified.

Objective. The objective of the study is to develop a technology for recognizing targets in real time as a component of a fire control system, using artificial intelligence, YOLO and machine learning.

Method. The article develops a video stream analysis technology for automatic target recognition in a fire control system based on machine learning. The paper proposes a target recognition module as a component of the fire control system within the framework of the proposed information technology using artificial intelligence. The YOLOv8 family of pattern recognition models was used to develop the target recognition module. The following augmentation methods were applied to the formed dataset (a sketch of roughly equivalent operations is given after the abstract):
– Bounding Box: Noise, up to 15% of pixels (salt-and-pepper noise added to the image inside the bounding box);
– Bounding Box: Blur, up to 2.5 px (Gaussian blur added to the image inside the bounding box);
– Cutout, 3 boxes of 10% size each (cutting out parts of the image);
– Brightness, between –25% and +25% (changing image brightness to increase the model's robustness to changes in lighting and camera settings);
– Rotation, between –15° and +15° (rotating the image object clockwise or counterclockwise);
– Flip, horizontal (flipping the image object horizontally).

Results. The data were collected from open sources, in particular from videos posted on the YouTube platform. The main task of data preprocessing is the classification of three classes of objects in video or in real time: APC, BMP and TANK. The dataset was formed using the Roboflow platform, first with its labeling tools and then with its augmentation tools. The dataset consists of 1193 unique images, distributed approximately equally among the classes. Training was conducted using Google Colab resources and took 100 epochs.

Conclusions. The analysis is performed according to mAP50 (average precision of 0.85), mAP50-95 (0.6), precision (0.89) and recall (0.75). The large losses are explained by the fact that the background was not taken into account during the research, that is, the module was not additionally trained on confirmed data (images) of the background without equipment present.
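The augmentation settings listed in the abstract are applied in the paper through the Roboflow platform. The Python sketch below only illustrates roughly equivalent image operations with NumPy and OpenCV; the function names and parameter defaults are assumptions for illustration, not the authors' implementation. Geometric transforms (rotation, flip, cutout near box edges) would also require updating the YOLO label coordinates, which Roboflow handles automatically and which is omitted here for brevity.

# Minimal sketch of augmentations analogous to those listed in the abstract.
# Assumed helper names and parameter defaults; not the authors' code.
import cv2
import numpy as np

rng = np.random.default_rng()

def salt_and_pepper(img, amount=0.15):
    """Corrupt up to `amount` of the pixels with salt (white) or pepper (black)."""
    out = img.copy()
    h, w = img.shape[:2]
    n = int(amount * h * w)
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    out[ys[: n // 2], xs[: n // 2]] = 255   # salt
    out[ys[n // 2:], xs[n // 2:]] = 0       # pepper
    return out

def gaussian_blur(img, max_sigma=2.5):
    """Gaussian blur with a randomly chosen sigma of up to ~2.5 px."""
    sigma = rng.uniform(0.1, max_sigma)
    return cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)

def cutout(img, n_boxes=3, frac=0.10):
    """Erase n_boxes rectangles, each spanning ~10% of the image side."""
    out = img.copy()
    h, w = img.shape[:2]
    bh, bw = int(frac * h), int(frac * w)
    for _ in range(n_boxes):
        y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
        out[y:y + bh, x:x + bw] = 0
    return out

def brightness(img, limit=0.25):
    """Scale brightness by a random factor in [-25%, +25%]."""
    factor = 1.0 + rng.uniform(-limit, limit)
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def rotate(img, max_deg=15):
    """Rotate around the image centre by a random angle in [-15, +15] degrees."""
    h, w = img.shape[:2]
    angle = rng.uniform(-max_deg, max_deg)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def hflip(img):
    """Flip the image horizontally."""
    return cv2.flip(img, 1)

# Example usage (hypothetical file name):
# img = cv2.imread("frame.jpg"); aug = hflip(brightness(salt_and_pepper(img)))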

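The abstract reports training a YOLOv8 model for 100 epochs on Google Colab and evaluating it by mAP50, mAP50-95, precision and recall. A minimal sketch with the Ultralytics Python API is given below; the specific YOLOv8 variant (nano), the dataset configuration file name and the image size are assumptions, since the abstract does not state them.

# Minimal training/validation sketch with the Ultralytics API (pip install ultralytics).
from ultralytics import YOLO

# Assumed name of the Roboflow export's dataset config listing the three
# classes named in the abstract (APC, BMP, TANK).
DATA_YAML = "dataset/data.yaml"

model = YOLO("yolov8n.pt")        # assumed variant; the abstract only names the YOLOv8 family
model.train(data=DATA_YAML, epochs=100, imgsz=640)  # 100 epochs, as reported

# Validation reports the same metrics the abstract cites.
metrics = model.val()
print("mAP50:    ", metrics.box.map50)   # abstract reports ~0.85
print("mAP50-95: ", metrics.box.map)     # abstract reports ~0.6
print("precision:", metrics.box.mp)      # abstract reports ~0.89
print("recall:   ", metrics.box.mr)      # abstract reports ~0.75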