A web‐based app for sunn pest species recognition (Hemiptera: Scutelleridae: Eurygaster) using machine learning
Abstract: Recent progress in machine learning, particularly in convolutional neural networks, has greatly improved insect pest identification. Eurygaster integriceps, a highly polymorphic true bug and a major wheat pest across the Western Palearctic, remains difficult to distinguish from its congeners even for specialists. Misidentifications reduce monitoring accuracy and hinder cost-effective pest control. To support reliable identification by non-experts while maintaining high classification accuracy, we trained the MobileNetV2 model on a large dataset combining images captured under controlled conditions with various devices, including a budget smartphone, and photographs taken in the wild. The trained model demonstrated high classification metrics in identifying Eurygaster species, with macro precision, recall and F1-score values of 0.901, 0.930 and 0.912, respectively. Based on this model, we developed an open-source, web-based application with a microservice architecture, allowing automated species identification from user-uploaded images. Publicly accessible at https://eurygaster.ru, this tool supports faster and more accurate field identification, helping improve pest management decisions and reduce economic losses caused by misidentification.
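The macro-averaged metrics reported above can be reproduced from raw predictions; the sketch below is a minimal, self-contained illustration (the species labels and predictions are invented, not the study's data):

```python
def macro_metrics(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over the classes in y_true."""
    classes = sorted(set(y_true))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        precisions.append(prec)
        recalls.append(rec)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Hypothetical labels for three Eurygaster species (not the study's data)
y_true = ["integriceps", "integriceps", "maura", "maura", "austriaca", "austriaca"]
y_pred = ["integriceps", "maura", "maura", "maura", "austriaca", "integriceps"]
p, r, f = macro_metrics(y_true, y_pred)
```

Macro averaging weights each species equally, so rare congeners count as much as the dominant class.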
- Research Article
- 10.1002/aps3.11371
- Jun 1, 2020
- Applications in Plant Sciences
Plants meet machines: Prospects in machine learning for plant biology
- Research Article
- 10.52783/jisem.v10i13s.2001
- Feb 25, 2025
- Journal of Information Systems Engineering and Management
Recent advancements in deep learning (DL) have shown significant promise in enhancing diagnostic accuracy (ACC) in medical imaging. This study explores the application of Convolutional Neural Networks (CNN) and the MobileNet architecture, optimized with Particle Swarm Optimization (PSO), for the classification of chest X-ray images. Our findings reveal that the CNN achieved impressive classification metrics, with a precision (PER) of 0.94, recall (REC) of 1.00, and an F1-score (F1-s) of 0.97 for the control class. Similarly, for the COVID-19 class, the CNN exhibited a PER of 0.86 and a REC of 0.92, culminating in an F1-s of 0.89. The MobileNet model, prior to PSO optimization, showed remarkable PER and REC across all classes, with overall ACC reaching 0.95. Post-PSO, MobileNet retained an overall ACC of 0.95, with marginal adjustments in PER and REC values, indicating refined model performance. Notably, the control class's PER improved to 0.99 after PSO, and the COVID-19 class saw an increase in REC to 0.98. These results underscore the potential of using sophisticated Machine Learning (ML) models to aid in the rapid and accurate diagnosis of pulmonary diseases. The high ACC and F1-s values suggest that both CNN and MobileNet models, particularly when enhanced by PSO, could serve as reliable tools in clinical settings, augmenting the capabilities of medical professionals in the interpretation of chest X-rays.
- Conference Article
- 10.12783/asc36/35816
- Sep 20, 2021
In this paper, we focus on exploring the relationship between weave patterns and their mechanical properties in woven fiber composites through Machine Learning. Specifically, we explore the interactions between woven architectures and in-plane stiffness properties through a Deep Convolutional Neural Network (DCNN) and a Generative Adversarial Network (GAN). Our research is important for understanding how a woven composite's pattern is related to its mechanical properties, and for accelerating woven composite design and optimization. We focus on two tasks: (1) Stiffness prediction: predicting in-plane stiffness properties for given weave patterns. Our DCNN extracts high-level features through several convolutional and fully connected layers to determine the final predictions. (2) Weave pattern prediction: predicting weave patterns for target stiffness properties, which can be treated as the reverse of the first task. Due to the many-to-one mapping between weave patterns and composite properties, we utilize a Decoder Neural Network as our baseline model and compare its performance with a GAN and a Genetic Algorithm. We represent the weave patterns as 2D checkerboard models and use finite element analysis (FEA) to determine in-plane stiffness properties, which serve as input data for our ML framework. We show that: (1) for stiffness prediction, the DCNN can predict stiffness values for a given weave pattern with relatively high accuracy (above 93%); (2) for weave pattern prediction, the GAN model gives the best prediction accuracy (above 92%) while the Decoder Neural Network has the best time efficiency. (Haotian Feng)
- Research Article
- 10.1158/1538-7445.sabcs22-p6-04-08
- Mar 1, 2023
- Cancer Research
Background: Neoadjuvant treatment of breast cancer has been shown to potentially reduce the extent and morbidity of subsequent surgery. Response to neoadjuvant therapy may also be prognostic; complete pathologic response (pCR) following neoadjuvant treatment is associated with improved long-term outcomes. pCR, defined as the absence of residual invasive cancer, is determined by evaluation of H&E-stained breast resections and regional lymph nodes following neoadjuvant treatment; however, pathologist assessment is subject to intra- and inter-reader variability. Here we report machine learning (ML)-based models to identify tissue regions and cell types in the tumor microenvironment (TME) of H&E-stained breast cancer specimens. Model predictions were used to derive tumor bed area, a key component of the residual cancer burden score (RCB) used to assess neoadjuvant-treatment pathological response. Methods: Convolutional neural network (CNN) models were trained using digitized H&E-stained whole slide images (WSIs) of 2700 neoadjuvant-treated breast cancer specimens (resections and biopsies) from 4 sources, and an additional 1100 breast cancer primary resections from TCGA. 229,901 pathologist annotations were used to train CNN models to segment tissue regions (cancer epithelium, stroma, diffuse inflammatory infiltrate, ductal carcinoma in situ, lymph nodes and necrosis) and cell types (cancer epithelial cells, fibroblasts, lymphocytes, macrophages, foamy macrophages and plasma cells) at single-pixel resolution. These tissue region segmentations were then used to derive tumor bed area using a convex hull algorithm. Each model was evaluated by board certified pathologists for performance. Model predictions of tumor bed area were evaluated in comparison to mean measurements from 3 pathologists for each of 22 held-out test slides. 
To further assess cell model performance, 5 pathologists exhaustively annotated 120 frames (300 x 300 pixels) on test samples from a dataset not used in model development (N=536; resections and biopsies) to produce consensus ground truth cell labels. Model predictions were compared with pathologist annotations in these frames using Pearson correlation, precision, recall, and F1 metrics. Only those classes with greater than 50 consensus cells identified were evaluated. Results: CNN predictions of tissue and cell classes within H&E breast cancer WSIs showed concordance with manual pathologist consensus labels. The weighted average Pearson correlation (across the relevant cell types) between the model and consensus was 0.75, comparable to the correlation of 0.81 between pathologists and consensus. Classification metrics for each cell class are reported in Table 1. Reduced performance of the model relative to the average pathologist performance may be due to heterogeneous slide characteristics and infrequency of some cell types in the data. For prediction of tumor bed area, CNN model predictions showed moderate correlation with pathologist consensus (Pearson r=0.65, 95% CI: 0.38-0.81). Conclusions: CNN model classification of cell types and tissue regions across entire H&E breast cancer WSIs shows concordance with pathologist consensus. Model predictions of tumor bed area also show concordance with pathologist assessment and can be used to derive the RCB score. These models can be reproducibly applied to quantify diverse histological features in large datasets, potentially enabling improved standardization and efficiency of pathologist evaluation of the breast cancer TME and neoadjuvant response. Classification Metrics for Individual Cell Classes Citation Format: Christian Kirkup, Sanjana Vasudevan, Filip Kos, Benjamin Trotter, Murray Resnick, Andrew H. Beck, Michael Montalto, Ilan Wapinski, Ben Glass, Mary Lin, Stephanie Hennek, Archit Khosla, Michael G. 
Drage, Laura Chambre. Machine learning-based characterization of the breast cancer tumor microenvironment for assessment of neoadjuvant-treatment response [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr P6-04-08.
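The convex hull step used to derive tumor bed area can be illustrated with a short sketch (Andrew's monotone chain plus the shoelace formula; the pixel coordinates below are hypothetical, and the published models operate on segmentation masks rather than a handful of points):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2.0

# Hypothetical centroids of predicted cancer-epithelium regions (in pixels)
regions = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
hull = convex_hull(regions)
area = polygon_area(hull)  # a proxy for tumor bed area
```

The interior point is discarded by the hull, so the area reflects the extent of the predicted regions rather than their count.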
- Research Article
- 10.3390/electronics13152948
- Jul 26, 2024
- Electronics
As urbanization accelerates, fire incidents pose significant hazards. Enhancing the accuracy of remote fire detection systems while reducing computational complexity and power consumption on edge hardware is crucial. Therefore, this paper investigates an innovative lightweight Convolutional Spiking Neural Network (CSNN) method for fire detection based on acoustics. In this model, Poisson encoder and convolution encoder strategies are considered and compared. Additionally, the study investigates the impact of observation time steps, surrogate gradient functions, and the threshold and decay rate of the membrane potential on network performance. A comparison is made between the classification metrics of traditional Convolutional Neural Network (CNN) approaches and the proposed lightweight CSNN method. To assess the generalization performance of the proposed lightweight method, publicly available datasets are merged with our experimental data for training, which results in a high accuracy of 99.02%, a precision of 99.37%, a recall of 98.75%, and an F1-score of 99.06% on the test datasets.
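The Poisson encoding strategy mentioned above is typically a rate code, with each input intensity driving a Bernoulli spike train; a minimal sketch, assuming intensities already scaled to [0, 1] (the values below are invented):

```python
import random

def poisson_encode(values, steps, seed=0):
    """Rate-code each intensity in [0, 1] as a Bernoulli spike train of `steps` bins."""
    rng = random.Random(seed)
    return [[1 if rng.random() < v else 0 for _ in range(steps)] for v in values]

# Three hypothetical normalized acoustic-feature intensities
trains = poisson_encode([0.0, 1.0, 0.5], steps=200)
```

Over enough time steps the spike count approaches `v * steps`, which is why longer observation windows trade latency for accuracy.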
- Research Article
- 10.7717/peerj-cs.2994
- Jul 3, 2025
- PeerJ Computer Science
Breast cancer is one of the leading causes of death among women worldwide. Early detection plays a crucial role in reducing mortality rates. While mammography is a widely used diagnostic tool, computed tomography (CT) scans are increasingly being explored for detecting breast cancer due to their high-resolution imaging and ability to visualize tissue in 3D. Despite this potential, extracting meaningful patterns from these scans is difficult due to the complex and nonlinear nature of the tissue features. The challenge lies in developing computational methods that can accurately detect and localize breast cancer lesions, especially when tumors vary in size, shape, and density. In this article, we propose a framework called the convolutional neural bidirectional feature pyramid network, which integrates multi-scale feature extraction and bidirectional feature fusion for breast cancer detection in CT scans. The proposed framework classifies images as diseased or non-diseased and then identifies the infected region of the breast tissue. Using convolutional neural networks, we defined several layers to classify diseased and normal CT scan images. We collected breast CT scans from the radiology department of Ayub Teaching Hospital, Abbottabad, Pakistan. We evaluated the model using a variety of classification metrics, such as precision, recall, F1-measure, and average precision, to determine its effectiveness in finding breast cancer lesions, and achieved 96.11% accuracy. Our findings show that, compared with current state-of-the-art methods, the proposed framework produces satisfactory results in identifying breast cancer areas, achieving a 1.71% improvement over the baselines.
- Research Article
- 10.3390/ai6120312
- Nov 28, 2025
- AI
Electroencephalography (EEG) provides excellent temporal resolution for brain activity analysis but limited spatial resolution at the sensors, making source unmixing essential. Our objective is to enable accurate brain activity analysis from EEG by providing a fast, calibration-free alternative to independent component analysis (ICA) that preserves ICA-like component interpretability for real-time and large-scale use. We introduce a convolutional neural network (CNN) that estimates ICA-like component activations and scalp topographies directly from short, preprocessed EEG epochs, enabling real-time and large-scale analysis. EEG data were acquired from 44 participants during a 40-min lecture on image processing and preprocessed using standard EEGLAB procedures. The CNN was trained to estimate ICA-like components and evaluated against ICA using waveform morphology, spectral characteristics, and scalp topographies. We term the approach “adaptive” because, at test time, it is calibration-free and remains robust to user/session variability, device/montage perturbations, and within-session drift via per-epoch normalization and automated channel quality masking. No online weight updates are performed; robustness arises from these inference-time mechanisms and multi-subject training. The proposed method achieved an average F1-score of 94.9%, precision of 92.9%, recall of 97.2%, and overall accuracy of 93.2%. Moreover, mean processing time per subject was reduced from 332.73 s with ICA to 4.86 s using the CNN, a ~68× improvement. While our primary endpoint is ICA-like decomposition fidelity (waveform, spectral, and scalp-map agreement), the clean/artifact classification metrics are reported only as a downstream utility check confirming that the CNN-ICA outputs remain practically useful for routine quality control. 
These results show that CNN-based EEG decomposition provides a practical and accurate alternative to ICA, delivering substantial computational gains while preserving signal fidelity and making ICA-like decomposition feasible for real-time and large-scale brain activity analysis in clinical, educational, and research contexts.
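The per-epoch normalization credited with the method's inference-time robustness can be sketched as a per-channel z-score (the epoch layout below, channels as lists of samples, is an assumption, not the authors' data format):

```python
def normalize_epoch(epoch):
    """Z-score each channel of one EEG epoch (list of per-channel sample lists)."""
    out = []
    for ch in epoch:
        n = len(ch)
        mean = sum(ch) / n
        var = sum((x - mean) ** 2 for x in ch) / n
        std = var ** 0.5 or 1.0  # guard against flat (zero-variance) channels
        out.append([(x - mean) / std for x in ch])
    return out

# Two hypothetical channels: one varying, one flat
epoch = normalize_epoch([[1.0, 2.0, 3.0], [5.0, 5.0, 5.0]])
```

Because the statistics are recomputed for every epoch, slow drifts in amplitude or offset within a session do not accumulate across inputs.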
- Research Article
- 10.1016/j.cmpb.2022.107262
- Nov 26, 2022
- Computer Methods and Programs in Biomedicine
Rapid diagnosis of Covid-19 infections by a progressively growing GAN and CNN optimisation
- Research Article
- 10.1149/ma2021-01541317mtgabs
- May 30, 2021
- Electrochemical Society Meeting Abstracts
The electronic nose is a gas detection instrument based on the bionic olfactory mechanism, usually consisting of a gas sensor array and a gas classification algorithm. The capability of the gas classification algorithm is critical to reliable gas recognition accuracy. The process of gas classification usually involves pattern recognition of multiple time-related gas sensor response curves. Traditional gas classification algorithms are mainly machine learning methods, such as PCA, LDA, ICA, SVM and KNN. These algorithms are relatively cumbersome because handcrafted features must be extracted before using them. Deep learning also has applications in electronic nose gas classification algorithms [1-4] that improve classification accuracy, but most deep neural networks have complex structures and consume huge computing resources. The spiking neural network is the third generation of artificial neural network, and its spiking neuron model, which is more biologically realistic than previous artificial neurons, can process spike sequence signals [5]. The spiking neural network model has a simple structure with higher computational efficiency, and occupies fewer computational resources. Moreover, its temporal nature makes it well suited to processing time-series information. In order to simultaneously take advantage of the efficient feature extraction of the convolutional neural network and the high computational efficiency of the spiking neural network, our team converted a convolutional neural network into a convolutional spiking neural network (CSNN) and applied it to gas classification. The activation function layer in the traditional convolutional layer was replaced with a spiking neuron layer, which used the IF or LIF spiking neuron model to transform the continuous values passed by the convolutional layer into discrete values so as to achieve the transmission of spikes between layers.
The first convolutional spiking layer was used as a spiking encoder, so a separate spiking encoding method such as Gaussian encoding was not needed. The spike-firing frequency output by the last layer of neurons was calculated to obtain the classification result: the probability that a gas sample belonged to a certain class was proportional to the spike-firing frequency of the corresponding neuron for that class. Our team built a convolutional spiking neural network model with 9 convolutional spiking layers and 2 fully connected spiking layers, and used a food spoilage gas dataset collected by us and an open-source gas mixtures dataset [6] to evaluate the capability of our model. For the gas mixtures dataset, ethylene, methane, CO and their mixed states need to be classified. After training, the CSNN achieved a test accuracy of 92.6%, while the other algorithms reached 92.9% (ResNet-18), 91.2% (one-dimensional deep convolutional neural network, 1D-DCNN) and 88.5% (SVM). As for the food spoilage gas dataset, 30 types of spoiled meat, vegetables, fruits and their mixed-state samples were measured. The first task was to classify the major categories of spoiled food: the 30 types of spoiled food odor samples were divided into 4 categories of fresh food, spoiled meat, spoiled vegetables and spoiled fruits. After training, the CSNN achieved a test accuracy of 81.4%, a modest improvement over 80.6% for ResNet-18, 80.1% for 1D-DCNN and 77.3% for SVM. The second task was to classify the subcategories of spoiled fruits, that is, 8 classes of spoiled fruit odor samples. After training, the CSNN achieved a high test accuracy of 90.7%, against 88.8% for ResNet-18, 87.1% for 1D-DCNN and 77.9% for PCA+ANN.
The CSNN output for a spoiled watermelon sample is shown in Figure 1. In conclusion, the CSNN had odor classification performance similar to ResNet-18, but the computing resources it occupied were only one-fifth of ResNet-18's. This research shows that the spiking neural network offers high odor classification accuracy, high computational efficiency and a small computing-resource footprint. It is well suited as a gas classification algorithm for the electronic nose and for further development.
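The LIF readout described above (leak, integrate, fire-and-reset, then classify by spike-firing frequency) can be sketched in a few lines; the threshold and decay values are illustrative, not taken from the abstract:

```python
def lif_spikes(inputs, threshold=1.0, decay=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential leaks by `decay` each step, integrates the
    input, and emits a spike (then resets to 0) on crossing the threshold.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = decay * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# Spike-firing frequency over the window serves as the class score,
# as in the readout layer described above (the inputs are invented).
train = lif_spikes([0.4, 0.4, 0.4, 0.4, 0.4, 0.4])
rate = sum(train) / len(train)
```

A stronger or more sustained input drives the potential over threshold more often, so the firing rate grows with class evidence.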
- Research Article
- 10.17485/ijst/v17i45.2728
- Dec 14, 2024
- Indian Journal Of Science And Technology
Objectives: To evaluate the efficiency of task prediction and resource allocation for load balancing (LB) in the cloud environment using a combined approach: Random Forest (RF) for task prediction and Particle Swarm Optimization with Convolutional Neural Networks (PSO-CNN) for resource prediction and allocation. Methods: The ensemble approach in the present study uses Random Forest (RF), a machine learning (ML) model, for task prediction and PSO+CNN, a bio-inspired algorithm paired with a Deep Learning (DL) model, for optimization and resource allocation. The study employs PSO to optimize the CNN in order to address algorithmic optimization in DL. The results show that the suggested model outperforms models such as CNN-LSTM (Long Short-Term Memory), CNN-GRU (Gated Recurrent Unit), and PSO-SVM (Support Vector Machine) in increasing the performance and efficacy of cloud systems. The experiment is implemented in Python and assessed on the publicly available Google Cluster dataset. Findings: ML and DL techniques are found to be more efficient in cloud infrastructure than conventional methods. The study examines the performance of the RF, PSO and CNN models and the hybrid RF-PSO-CNN model. Accuracy, precision, and F1-score metrics were used to assess the performance of the classification models. The recommended RF-PSO-CNN model outperforms the contrasted methods (CNN-LSTM, CNN-GRU and PSO-SVM) with an accuracy of 90%. As a result, both the classification assessment metrics and resource consumption show that the proposed model performs effectively. Novelty: The novel ensemble approach combines RF, PSO and CNN for LB in cloud computing. The task predicted by RF is assigned to the resource chosen by PSO and CNN, thereby improving the efficiency of task prediction and resource allocation.
Most of the research uses any two ML or DL methods for either predicting the tasks to be scheduled or deciding which resource to allocate. The present study uses a combination of an ML method (RF), a bio-inspired algorithm (PSO) and a DL model (CNN) for task and resource prediction concurrently, and it examines the effectiveness of LB in the cloud context. Keywords: Load Balancing (LB), Task Scheduling, Resource Allocation, Random Forest (RF), Convolutional Neural Networks (CNN), Particle Swarm Optimization (PSO)
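As a minimal illustration of the PSO component (a generic textbook update, not the authors' implementation), the canonical velocity and position updates can be written as follows, with a toy objective standing in for the model's validation loss:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimization over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a validation-loss surface: the sphere function
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In a hybrid like RF-PSO-CNN, the objective would instead score a candidate CNN configuration or resource assignment, with the same update rule unchanged.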
- Research Article
- 10.1182/blood-2024-211964
- Nov 5, 2024
- Blood
Systematic Review of Machine Learning Models for Myelodysplastic Syndrome Diagnosis
- Research Article
- 10.3390/rs15030798
- Jan 31, 2023
- Remote Sensing
Landslides are natural disasters that seriously affect human life and social development. In this study, the characteristics and effectiveness of convolutional neural network (CNN) and conventional machine learning (ML) methods in landslide susceptibility assessment (LSA) are compared. The six ML methods used in this study are AdaBoost, multilayer perceptron neural network (MLP-NN), random forest (RF), naive Bayes, decision tree (DT), and gradient boosting decision tree (GBDT). First, the basic knowledge and structures of the CNN and ML methods, and the steps of the LSA, are introduced. Then, 11 conditioning factors in three categories in the Hongxi River Basin, Pingwu County, Mianyang City, Sichuan Province are chosen to build the training, validation, and test samples. The CNN and ML models are constructed from these samples. For comparison, indicator methods, statistical methods, and landslide susceptibility maps (LSMs) are used. The results show that the CNN obtains the highest accuracy (86.41%) and the highest AUC (0.9249) in the LSA. The statistical methods, represented by the mean and variance of TP and TN, indicate a firmer estimate of the possibility of landslide occurrence. Furthermore, the LSMs show that all models can successfully identify most of the landslide points, but some models are insufficient for areas with a low frequency of landslides. The CNN model demonstrates better results in recognizing landslide cluster regions, which is related to the convolution operation taking the surrounding environmental information into account. The higher accuracy and more concentrated probability estimates of the CNN in LSA are of great significance for disaster prevention and mitigation, helping the efficient use of human and material resources. Although the CNN performs better than the other methods, there are still limitations; the identification of low-cluster landslide areas could be enhanced by improving the CNN model.
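The AUC figure quoted above can be computed without plotting a ROC curve, via the rank-sum (Mann-Whitney) formulation; a minimal sketch with invented labels and scores:

```python
def auc_score(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a random positive is scored above a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented susceptibility scores: 1 = landslide point, 0 = non-landslide
auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

Unlike accuracy, this measure is threshold-free, which is why it is reported alongside accuracy when susceptibility maps are compared.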
- Research Article
- 10.1038/s41598-022-14140-x
- Jun 15, 2022
- Scientific Reports
In a previous study, we identified asymmetries between the two eyes in fundus photographs, and the macula was the discriminative area for distinguishing left and right fundus images with > 99.9% accuracy. The purposes of this study were to investigate whether optical coherence tomography (OCT) images of the left and right eyes could be discriminated by convolutional neural networks (CNNs) and to support the previous result. We used a total of 129,546 OCT images. CNNs identified right and left horizontal images with high accuracy (99.50%). Even after flipping the left images, all of the CNNs were capable of discriminating them (DenseNet121: 90.33%, ResNet50: 88.20%, VGG19: 92.68%). The classification accuracy results were similar for the right and left flipped images (90.24% vs. 90.33%, respectively; p = 0.756). The CNNs also differentiated right and left vertical images (86.57%). In all cases, the discriminatory ability of the CNNs yielded a significant p value (< 0.001). However, the CNNs could not discriminate right horizontal images well (50.82%, p = 0.548). There was a significant difference in identification accuracy between right and left horizontal and vertical OCT images, and between flipped and non-flipped images. As this could introduce bias into machine learning, care should be taken when flipping images.
- Research Article
- 10.1145/3575798
- Apr 20, 2023
- ACM Transactions on Embedded Computing Systems
Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either explore a limited search space or employ suboptimal exploration techniques for simultaneous design decision investigations of the ML model and the accelerator. Furthermore, training the ML model and simulating the accelerator performance is computationally expensive. To address these limitations, this work proposes a novel neural architecture and hardware accelerator co-design framework, called CODEBench. It comprises two new benchmarking sub-frameworks, CNNBench and AccelBench, which explore expanded design spaces of convolutional neural networks (CNNs) and CNN accelerators. CNNBench leverages an advanced search technique, Bayesian Optimization using Second-order Gradients and Heteroscedastic Surrogate Model for Neural Architecture Search, to efficiently train a neural heteroscedastic surrogate model to converge to an optimal CNN architecture by employing second-order gradients. AccelBench performs cycle-accurate simulations for diverse accelerator architectures in a vast design space. With the proposed co-design method, called Bayesian Optimization using Second-order Gradients and Heteroscedastic Surrogate Model for Co-Design of CNNs and Accelerators, our best CNN–accelerator pair achieves 1.4% higher accuracy on the CIFAR-10 dataset compared to the state-of-the-art pair while enabling 59.1% lower latency and 60.8% lower energy consumption. On the ImageNet dataset, it achieves 3.7% higher Top1 accuracy at 43.8% lower latency and 11.2% lower energy consumption. CODEBench outperforms the state-of-the-art framework, i.e., Auto-NBA, by achieving 1.5% higher accuracy and 34.7× higher throughput while enabling 11.0× lower energy-delay product and 4.0× lower chip area on CIFAR-10.
- Conference Article
- 10.1109/impact47228.2019.9024985
- Oct 1, 2019
Defect inspection (to detect, classify, measure and analyze) has long been a challenging task in the semiconductor manufacturing (MFG) domain. This paper discusses a new Machine Learning (ML) approach which can be used to assist defect inspection in different MFG scenarios. Associated solutions have been developed and applied to surface defect inspection for substrate components used in our IC packaging MFG. During the past decade, help from image-based Automated Optical Inspection (AOI) equipment has significantly reduced manual effort in substrate/PCB defect examination, but it is still insufficient for defect classification automation. Recently, the adoption of ML and Convolutional Neural Network (CNN) based Deep Learning technologies has raised hopes of advancing defect classification automation to a level acceptable for rigid MFG practices. Most published CNN models, however, tend to use a large number of learning parameters (floating-point variables) during computing in order to gain high image recognition accuracy. This massive blow-up of parameters often causes heavy, power-hungry computation. While applying CNNs to our substrate defect datasets, each of which contains hundreds of thousands of high-resolution images collected across different MFG processes and products, a training run could take days or even weeks to finish. Also, the more learning variables used for training, the longer the inference time will be. These situations make complex CNN models hard to fit within our overnight retraining and in-line inference time constraints. Therefore, a more efficient ML method is strongly preferred in our MFG environment. In this paper, we develop a feature-spanning ML approach which takes the feature of an image as a base. Through mathematical transformations the base is spanned to fit into an accumulated and gradually divided feature-of-classes space.
During the feature-spanning process, the initialization of parameters is tightly controlled, meaning the redundancy blow-up is carefully calculated to optimize the usage of computation resources. To demonstrate the advantage of our ML method, the public CIFAR-10 dataset is used for benchmarking; it contains sufficient diversity and is small enough to allow quick observations of computing performance. Our results indicate that the parameters used by our method are only 5.6% of ResNet-101's. We also apply this new ML technology to the inspection of several substrate defect types. In particular, defects on solder mask are hard to recognize because their colors are quite similar to the substrate background. The benchmarking shows very competitive results compared with other CNNs. Further benchmarking shows that our ML method holds a high degree of shift invariance, implying that it can help resist MFG condition changes during defect inspection. Our ML approach requires far fewer learning variables, and thus achieves very fast training and inference speeds. When adopted in MFG production lines, it helps to reduce computing cost and energy and comply with Green Factory policy. In addition, our ML technology is highly scalable, capable of performing heterogeneous learning on data combined from different sources, such as images plus CAM design references, Z-height, or time-series signal waveforms. As a continuous development effort, we are making the ML behavior in our AI-for-MFG applications more explainable, controllable, and self-adaptive to changes in the MFG environment.