Classification of Dogs, Cats, and Tigers Using the Convolutional Neural Network (CNN) Method
Animal classification is a complex challenge due to variations in shape, color, and patterns across species. Traditional methods, which rely on manual feature extraction, are often ineffective in handling such complexities. Therefore, this study employs Convolutional Neural Networks (CNNs) as a more accurate approach for automatic feature extraction and image classification. This research aims to develop an animal image classification model, specifically for dogs, cats, and tigers, utilizing CNNs. The dataset consists of 4,800 images obtained from Kaggle, which were divided into training, testing, and validation sets. The CNN model was built using TensorFlow/Keras, trained for 50 epochs, and evaluated using accuracy, precision, recall, F1-score, and a confusion matrix. The experimental results show that the model achieved an overall accuracy of 88%, with the highest performance in tiger classification (99% accuracy). However, distinguishing between dogs and cats remains a challenge, with an accuracy of 81% for both classes. The findings indicate that CNNs are effective in automatically classifying animal images, although challenges persist in differentiating visually similar species. This study lays the groundwork for further enhancements, such as refining the model architecture or utilizing data augmentation techniques to boost classification accuracy.
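The evaluation pipeline the abstract names (overall accuracy plus per-class precision, recall, and F1 derived from a confusion matrix) can be sketched in plain Python. The matrix values below are illustrative placeholders, not the study's actual counts.

```python
# Sketch: metrics from a 3-class confusion matrix (classes: dog, cat, tiger).
# The counts are illustrative, not the paper's data.

def metrics_from_confusion(cm):
    """cm[i][j] = number of samples with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    per_class = {}
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp   # predicted i, true something else
        fn = sum(cm[i]) - tp                        # true i, predicted something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class[i] = (precision, recall, f1)
    return accuracy, per_class

cm = [[81, 18, 1],    # true dog
      [17, 81, 2],    # true cat
      [0, 1, 99]]     # true tiger
acc, scores = metrics_from_confusion(cm)
```

With these toy counts the tiger class scores far higher than dogs and cats, mirroring the pattern the abstract reports.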
- Research Article
78
- 10.1002/ctm2.102
- Jun 1, 2020
- Clinical and Translational Medicine
Deep learning-based classification and mutation prediction from histopathological images of hepatocellular carcinoma.
- Research Article
- 10.62617/mcb547
- Nov 20, 2024
- Molecular & Cellular Biomechanics
This study aims to design an epidemic prevention and control bracelet system that integrates a convolutional neural network (CNN). The system collects and processes the user's physiological index data in real time, particularly in remote physical-education settings, and provides learners with immediate physiological feedback and personalized adaptive training suggestions through accurate human action recognition (HAR). The one-dimensional acceleration signal is converted into a two-dimensional image, and the CNN's powerful feature-extraction and classification ability is used to address two problems: manual feature extraction is complex, and nonlinear features are difficult to capture. By considering the joint action trajectory within a time window, a dynamic Recurrence Plot (RP) is constructed to capture the dynamic changes among joints; the RP data are then converted into image form for input to the CNN. In the HAR task, the CNN automatically learns useful features from images without manually designed features, and can be used directly for classification. Experimental results show that, compared with other algorithms, the proposed RP + CNN model performs best in action recognition, with an accuracy of 96.89% and an F1 value of 86.76%. The RP captures dynamic patterns and periodic behaviors in the time series by visualizing the repeated appearance of system states over time. The RP + CNN model is used to extract and classify human action features, significantly improving the accuracy and efficiency of HAR. This method not only simplifies the complex process of traditional manual feature extraction but also enhances the system's ability to identify nonlinear and complex action patterns, providing strong technical support for remote physical education.
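The recurrence-plot construction described above (turning a 1-D signal into a 2-D image by marking when states recur) can be sketched with NumPy. The threshold `eps` and the toy sine signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when states x_i and x_j
    lie within distance eps of each other."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])   # pairwise state distances
    return (d <= eps).astype(np.uint8)

# Toy 1-D acceleration signal; a periodic motion produces the
# diagonal-line texture that the CNN later classifies.
t = np.linspace(0, 4 * np.pi, 64)
rp = recurrence_plot(np.sin(t), eps=0.1)
```

The resulting matrix is symmetric with a filled main diagonal, and can be saved or resized as a grayscale image for CNN input.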
- Research Article
2
- 10.1186/s12885-024-11962-y
- Mar 1, 2024
- BMC Cancer
Objective: The risk category of gastric gastrointestinal stromal tumors (GISTs) is closely related to the surgical method, the scope of resection, and the need for preoperative chemotherapy. We aimed to develop and validate convolutional neural network (CNN) models based on preoperative venous-phase CT images to predict the risk category of gastric GISTs. Methods: A total of 425 patients pathologically diagnosed with gastric GISTs at the authors' medical centers between January 2012 and July 2021 were split into a training set (154, 84, and 59 with very low/low, intermediate, and high risk, respectively) and a validation set (67, 35, and 26, respectively). Three CNN models were constructed by taking the 1, 4, and 7 slices above and below the maximum tumour mask slice on venous-phase CT images, establishing the CNN_layer3, CNN_layer9, and CNN_layer15 models, respectively. The area under the receiver operating characteristic curve (AUROC) and the Obuchowski index were calculated to compare the diagnostic performance of the CNN models. Results: In the validation set, CNN_layer3, CNN_layer9, and CNN_layer15 had AUROCs of 0.89, 0.90, and 0.90, respectively, for low-risk gastric GISTs; 0.82, 0.83, and 0.83 for intermediate-risk gastric GISTs; and 0.86, 0.86, and 0.85 for high-risk gastric GISTs. CNN_layer3 (Obuchowski index, 0.871) performed similarly to CNN_layer9 and CNN_layer15 (Obuchowski index, 0.875 and 0.873, respectively) in predicting the gastric GIST risk category (all P > .05). Conclusions: CNN models based on preoperative venous-phase CT images showed good performance for predicting the risk category of gastric GISTs.
- Research Article
- 10.1177/18724981241299605
- Dec 8, 2024
- Intelligent Decision Technologies
Multimodal medical information fusion has emerged as a revolutionary method in intelligent healthcare, allowing comprehensive consideration of patient well-being and tailored treatment strategies. However, current approaches produce erroneous findings and struggle in the early phases of brain tumour prediction from MRI images. In healthcare, accurate and reliable classification of brain images is essential for diagnosis and strategic decision-making, and semantic gaps remain the main problem in brain tumour image classification. Traditional ML models for classification rely on handcrafted, low-level features and computation-intensive feature-extraction and classification pipelines. In recent years, substantial improvements have been made in deep learning for automated image classification: deep Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been particularly effective in multimodal image classification. Hence, this paper presents the Multimodal Fusion Model-assisted Convolutional Neural Network and Recurrent Neural Network (MFM-CNN-RNN) for automatic image classification in smart healthcare. The study aims to determine whether a fused CT and MRI brain scan is normal or abnormal. To enhance the accuracy of brain tumour image classification, the method exploits the multimodality information within CNNs and RNNs by extracting and fusing unique and complementary features from the different modalities. Within this framework, features are extracted with the CNN, while dependencies and classification are handled by the RNN; by design, the LSTM excels at time-series analysis, which involves processing data in sequential order.
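A minimal sketch of the feature-level fusion idea (extract one vector per modality, then combine them before classification), assuming a simple L2-normalised concatenation rather than the paper's full MFM-CNN-RNN pipeline:

```python
import numpy as np

def fuse_features(ct_feat, mri_feat):
    """Feature-level fusion sketch: L2-normalise each modality's feature
    vector, then concatenate, so neither modality dominates by scale.
    (Assumed illustrative scheme, not the paper's exact fusion model.)"""
    ct = ct_feat / (np.linalg.norm(ct_feat) + 1e-12)
    mri = mri_feat / (np.linalg.norm(mri_feat) + 1e-12)
    return np.concatenate([ct, mri])

# Placeholder 128-dim feature vectors standing in for CNN outputs
fused = fuse_features(np.random.rand(128), np.random.rand(128))
```

In the paper's framework the fused vector would then be passed to the RNN/LSTM stage for classification.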
- Front Matter
1
- 10.1016/j.gie.2020.12.008
- Mar 7, 2021
- Gastrointestinal Endoscopy
Artificial intelligence: finding the intersection of predictive modeling and clinical utility
- Research Article
5
- 10.1155/2021/6370509
- Jan 1, 2021
- Computational Intelligence and Neuroscience
The HTP (House-Tree-Person) test is a widely studied and applied psychological assessment technique in psychometrics. As a projective test, it relies on the free expression and creativity of the painting itself, which is why group psychological counselling based on it is widely used in mental health education. Compared with traditional neural networks, deep learning networks have more and deeper layers and can learn more complex processing functions. At this stage, image recognition technology can assist human vision: people can quickly retrieve the information in a picture, for example by photographing an object that is difficult to describe and searching for content related to it. The convolutional neural network, widely used in computer-vision image classification, can automatically learn features from data without manual feature extraction. Compared with traditional tests, the HTP test can reflect the painting characteristics of different groups; after quantitative scoring it has good reliability and validity, and it has high application value in psychological evaluation, especially in the diagnosis of mental illness. This paper focuses on the subjectivity of HTP evaluation: the traditional HTP assessment process relies on the experience of researchers to extract painting features and perform classification, whereas the convolutional neural network is a mature deep learning technology that can automate this step.
- Research Article
- 10.3389/fevo.2024.1363423
- Jun 28, 2024
- Frontiers in Ecology and Evolution
Background: Calcareous nannofossils are minute microfossils widely present in marine strata. Their identification holds significant value in studies of stratigraphic dating, paleo-environmental evolution, and paleoclimate reconstruction. However, identifying these fossils is time-consuming, and the discrepancies between different manual identification methods are substantial, hindering quantification efforts. It is therefore necessary to explore automated, assisted identification of fossil species. This study focused on 18 key fossil species from the Miocene. Five convolutional neural network (CNN) models and 10 data augmentation techniques were compared. These models and techniques were used to analyze and jointly train on two- and three-dimensional fossil morphologies and structures obtained from three imaging methods: single-polarized light microscopy, orthogonal polarized light microscopy, and scanning electron microscopy. Model performance was evaluated on the test set using metrics such as the confusion matrix and top-k accuracy. Results: For the calcareous nannofossil images, the most effective data augmentation approach is a combination of four methods: random rotation, random mirroring, random brightness, and gamma correction. Among the CNN models, DenseNet121 performs best, achieving an identification accuracy of 94.56%. Moreover, this model can distinguish fossils beyond the 18 key species as well as non-fossil debris. The confusion-matrix evaluation shows that the model has strong generalization capability and outputs highly credible identification results. Conclusion: Drawing on the CNN identification results, this study asserts a robust correlation among extinction photographs, planar images, and stereoscopic morphological images of fossil species.
Collective training facilitates the joint extraction and analysis of fossil features under different imaging methods. CNN demonstrates many advantages in the identification of calcareous nannofossils, offering convenience to researchers in various fields, such as stratigraphy, paleo-ecology, paleoclimate, and paleo-environments of ancient oceans. It has great potential for advancing the development of marine surveys and stratigraphic recognition processes in the future.
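The winning four-method augmentation combination (random rotation, random mirroring, random brightness, gamma correction) might look like the following NumPy sketch for a grayscale image in [0, 1]; the parameter ranges and 90-degree rotation steps are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Apply the four augmentations named in the abstract to one grayscale
    image in [0, 1]. Parameter ranges are illustrative assumptions."""
    img = np.rot90(img, k=rng.integers(0, 4))            # random rotation
    if rng.random() < 0.5:
        img = np.fliplr(img)                             # random mirroring
    img = np.clip(img + rng.uniform(-0.1, 0.1), 0, 1)    # random brightness
    gamma = rng.uniform(0.8, 1.2)
    return np.clip(img, 0, 1) ** gamma                   # gamma correction

out = augment(np.random.rand(32, 32))
```

Applying the pipeline independently to each training image expands the effective dataset without changing the fossil labels.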
- Research Article
159
- 10.1016/j.aei.2021.101406
- Sep 7, 2021
- Advanced Engineering Informatics
An improved convolutional neural network with an adaptable learning rate towards multi-signal fault diagnosis of hydraulic piston pump
- Conference Article
103
- 10.1109/dicta.2016.7797053
- Nov 1, 2016
This paper presents the impact of automatic feature extraction in a deep learning architecture such as the Convolutional Neural Network (CNN). Recently, CNN has become a very popular tool for image classification, as it can automatically extract, learn, and classify features. It is a common belief that CNN always performs better than other well-known classifiers. However, there is no systematic study showing that automatic feature extraction in CNN is any better than other simple feature extraction techniques, nor one showing that other simple neural network architectures cannot achieve the same accuracy as CNN. In this paper, a systematic study of CNN's feature extraction is presented. CNN with automatic feature extraction is first evaluated on a number of benchmark datasets; then a simple traditional Multi-Layer Perceptron (MLP) with the full image, and with manual feature extraction, is evaluated on the same benchmark datasets. The purpose is to see whether feature extraction in CNN performs any better than simple features with an MLP or the full image with an MLP. Many experiments were conducted systematically by varying the numbers of epochs and hidden neurons. The experimental results revealed that a traditional MLP with suitable parameters can perform as well as CNN, or better in certain cases.
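The contrast the paper draws between a full-image MLP input and simple manual features can be illustrated with a small sketch; the four statistics chosen here are an assumed example of "simple" hand-crafted features, not the paper's exact set.

```python
import numpy as np

def manual_features(img):
    """Hand-crafted features of the kind compared against CNN's learned
    ones: intensity statistics plus mean horizontal/vertical gradient
    magnitude (an illustrative choice, not the paper's feature set)."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

img = np.random.rand(28, 28)
full = img.ravel()            # full-image MLP input: 784 values
feat = manual_features(img)   # manual-feature MLP input: 4 values
```

Both vectors can be fed to the same MLP; the experiment then varies epochs and hidden neurons to compare the two inputs against the CNN.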
- Research Article
82
- 10.1016/j.jrmge.2021.09.004
- Dec 1, 2021
- Journal of Rock Mechanics and Geotechnical Engineering
Tunnel boring machine vibration-based deep learning for the ground identification of working faces
- Research Article
70
- 10.1016/j.measurement.2018.05.003
- May 9, 2018
- Measurement
A retinal vessel detection approach using convolution neural network with reinforcement sample learning strategy
- Research Article
40
- 10.1016/j.foodcont.2022.109291
- Aug 4, 2022
- Food Control
Identification of slightly sprouted wheat kernels using hyperspectral imaging technology and different deep convolutional neural networks
- Research Article
20
- 10.1109/lgrs.2020.3020098
- Aug 8, 2020
- IEEE Geoscience and Remote Sensing Letters
Identifying tree species in aerial images is essential for land-use classification, plantation monitoring, and impact assessment of natural disasters. Manual identification of trees in aerial images is tedious, costly, and error-prone, so automatic classification methods are necessary. Convolutional Neural Network (CNN) models have been highly successful in image classification across domains. However, CNN models usually require intensive manual annotation to create large training sets. One may conceptually divide a CNN into convolutional layers for feature extraction and fully connected layers for feature-space reduction and classification. We present a method that needs only a minimal set of user-selected images to train the CNN's feature extractor, reducing the number of images required to train the fully connected layers. The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes, allowing better user control and understanding of the training process. It does not rely on backpropagation-based optimization, and we demonstrate its advantages on the binary classification of coconut-tree aerial images against one of the most popular CNN models.
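The marker-based filter learning could be sketched as follows, assuming the simplest variant: kernels are zero-mean, unit-norm patches taken at user-marked pixels, so each kernel acts as a matched filter for its local pattern. The paper's actual estimator may differ.

```python
import numpy as np

def filters_from_markers(img, marker_coords, k=3):
    """Build k-by-k convolution kernels from patches centred on
    user-drawn marker pixels; zero-mean, unit-norm normalisation is an
    assumed choice, not necessarily the paper's."""
    h = k // 2
    kernels = []
    for (r, c) in marker_coords:
        patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)
        patch -= patch.mean()               # remove local brightness
        n = np.linalg.norm(patch)
        if n > 0:
            patch /= n                      # unit response scale
        kernels.append(patch)
    return np.stack(kernels)

img = np.random.rand(16, 16)
bank = filters_from_markers(img, [(4, 4), (8, 8), (12, 12)])
```

No backpropagation is involved: the filter bank is fixed once built, and only the fully connected layers need labeled training images.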
- Conference Article
7
- 10.1109/ddcls49620.2020.9275264
- Nov 20, 2020
This paper focuses on the diagnosis of compound faults in rotating machines, where simultaneously occurring rolling-bearing and sun-gear faults are treated as the compound fault. Because traditional compound fault diagnosis methods usually rely on manual fault-feature extraction, which depends heavily on engineering experience, we propose a compound fault diagnosis method named the multi-sensor based convolutional neural network (MCNN). For vibration signals of compound faults, the different transmission paths and sensor positions mean that part of the embedded single faults may carry higher energy; collecting vibration signals from three sensors at different positions helps guarantee the completeness of the compound fault's characteristics. The multi-sensor signals are then combined and fused by the convolutional operation of the convolutional neural network (CNN) model, which automatically extracts features from the vibration signals and performs classification, serving both fault extraction and fault recognition. Experiments on a physical power-transmission platform verify the proposed method with satisfactory performance.
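The sensor-fusion step (stacking three sensors' vibration signals as channels and fusing them in the first convolution) can be sketched as a toy multi-channel 1-D convolution; the signals and the averaging kernel are illustrative, not the MCNN model.

```python
import numpy as np

def fuse_multisensor(signals, kernel):
    """Stack vibration signals from several sensors as input channels and
    fuse them with one multi-channel 1-D convolution, mirroring how a
    CNN's first layer combines sensors (toy sketch, not the MCNN)."""
    x = np.stack(signals)                 # shape (channels, length)
    c, L = x.shape
    k = kernel.shape[1]                   # kernel shape (channels, k)
    out = np.empty(L - k + 1)
    for i in range(L - k + 1):
        # each output sample sums over the window AND over all channels
        out[i] = np.sum(x[:, i:i + k] * kernel)
    return out

# Three phase-shifted toy sensor signals standing in for the three sensors
sigs = [np.sin(np.linspace(0, 6, 100) + p) for p in (0.0, 0.5, 1.0)]
fused = fuse_multisensor(sigs, np.ones((3, 5)) / 15)
```

In the real model the kernel weights are learned, so the network itself decides how strongly each sensor contributes to the fused feature map.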
- Abstract
- 10.1016/j.cvdhj.2022.07.007
- Aug 1, 2022
- Cardiovascular Digital Health Journal
A convolutional neural network for automatic discrimination of pause episodes detected by an insertable cardiac monitor