Tea is an important economic crop that is widely grown in Indonesia, and in many countries accurate mapping of tea plantations is essential for the operation, management, and monitoring of the tea industry. We propose a classification of tea plantations based on a Convolutional Neural Network (CNN) applied to orthomosaics built from aerial images, which identifies the condition of the plantations from three observed parameters: the condition of the tea leaves, the estimated yield, and treeless areas caused by tree death. In this study we sampled a 20-hectare area and classified images using maps generated by drones in previous studies. Image segmentation is performed to preserve image objects, while an enhanced CNN model is used to extract deep image features. To obtain complete coverage, UAV (Unmanned Aerial Vehicle) images serve as the basis of the map and are stitched into a single image; the resulting map then undergoes image classification, in which the information it contains is mapped and divided according to type. The error threshold for the UAV imagery is 5% of the total captured area, about 1 ha of the 20 ha sample; if the error of a UAV image exceeds 5%, the image does not meet the classification requirements. This margin of error was determined from the performance of the drone camera when capturing Fig. 2, with each drone image captured at a resolution of 4096 × 2160. We conclude that the proposed method for mapping tea plantations with ultra-high-resolution remote sensing imagery is effective and has great potential, for example in developing drone aerial photography methods for tea plantations based on image classification for forecasting. Image stitching can be used to improve the monitoring of tea plantations and to predict harvest time through the classification process. The tea garden map contains five types of information, categorized as harvest time, medium leaf tea, milled tea, tea, and old tea. Recognition accuracy was assessed with an error matrix over 123 random points spread across the map, of which 113 were identified correctly, giving an average accuracy of 91.87%; this exceeds the specified threshold of 75%. A limitation of the method is that pixels of similar color cannot always be distinguished, which leads to occasional misdetections. In addition, image stitching with the orthomosaic method was performed successfully and works well with CNN-based classification.
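The abstract does not describe the stitching pipeline in detail. As an illustration only, the sketch below shows how individual UAV frames could be combined into a single mosaic using OpenCV's high-level Stitcher API; the file paths and stitching mode are assumptions, and the study's actual orthomosaic workflow (georeferencing, blending, etc.) may differ.

```python
# Illustrative sketch only: stitching UAV frames into a single mosaic with
# OpenCV's Stitcher API. File paths are hypothetical; the paper's real
# orthomosaic pipeline is not described in the abstract.
import cv2
import glob

# Load the individual 4096 x 2160 UAV frames (paths are assumed).
frames = [cv2.imread(p) for p in sorted(glob.glob("uav_frames/*.jpg"))]

# SCANS mode is intended for roughly planar, nadir-looking imagery.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("tea_plantation_mosaic.jpg", mosaic)
else:
    # A non-zero status means stitching failed (e.g. too little overlap).
    print(f"Stitching failed with status {status}")
```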
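The architecture of the "enhanced CNN model" is likewise not given in the abstract. The following minimal sketch shows one plausible shape of a CNN classifier over image tiles cut from the stitched map, with the five map classes named above; the tile size, layer sizes, and training setup are all assumptions, not the authors' model.

```python
# Illustrative sketch only: a small CNN tile classifier for the five map
# classes named in the abstract. Architecture details are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5   # harvest time, medium leaf tea, milled tea, tea, old tea
TILE_SIZE = 128   # assumed tile size cut from the orthomosaic

model = models.Sequential([
    layers.Input(shape=(TILE_SIZE, TILE_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_tiles, train_labels, ...)  # training data not shown
```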
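The accuracy and threshold figures quoted above follow from simple arithmetic; the snippet below is only a sanity check of the reported numbers, not part of the study's implementation.

```python
# Sanity check of the figures reported in the abstract (not the study's code).

# Error-matrix test: 113 of 123 random points identified correctly.
correct_points = 113
total_points = 123
accuracy = correct_points / total_points            # ~0.9187
print(f"Accuracy: {accuracy:.2%}")                  # 91.87%
print("Meets 75% threshold:", accuracy >= 0.75)     # True

# Area error threshold: 5% of the 20 ha sample area.
sample_area_ha = 20.0
max_error_ha = 0.05 * sample_area_ha                # 1.0 ha
print(f"Maximum allowed error area: {max_error_ha} ha")
```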