Deep Learning and Generative AI for Monolithic and Chiplet SoC Design and Verification: A Survey
- Research Article
- 10.1155/2022/4254631
- Jul 14, 2022
- Computational Intelligence and Neuroscience
COVID-19 detection and classification from chest X-ray images is a current hot research topic within the important application area of medical image analysis. To halt the spread of COVID-19, it is critical to identify the infection as soon as possible. Due to time constraints and the limited availability of expert radiologists, manually diagnosing this infection from chest X-ray images is a difficult and time-consuming process. Artificial intelligence techniques have had a significant impact on medical image analysis and have introduced several techniques for COVID-19 diagnosis. Deep learning and explainable AI have shown significant popularity among AI techniques for COVID-19 detection and classification. In this work, we propose a deep learning and explainable AI technique for the diagnosis and classification of COVID-19 using chest X-ray images. Initially, a hybrid contrast enhancement technique is proposed and applied to the original images, which are later utilized for the training of two modified deep learning models. The deep transfer learning concept is selected for the training of pretrained modified models that are later employed for feature extraction. Features of both deep models are fused using improved canonical correlation analysis and further optimized using a hybrid algorithm named Whale-Elephant Herding. Through this algorithm, the best features are selected and classified using an extreme learning machine (ELM). Moreover, the modified deep models are utilized for Grad-CAM visualization. The experimental process was conducted on three publicly available datasets and achieved accuracies of 99.1%, 98.2%, and 96.7%, respectively. Moreover, an ablation study showed that the proposed method's accuracy is better than that of the other methods.
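The pipeline above closes with a Grad-CAM visualization of the modified deep models. As a minimal illustration of that step only (the toy activation and gradient maps below are invented, not taken from the paper), Grad-CAM weights each channel's activation map by its average gradient and applies a ReLU so that only positively contributing regions remain:

```python
def grad_cam(activations, gradients):
    # activations, gradients: K channel maps, each H x W (lists of lists),
    # taken from a convolutional layer for the target class.
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Channel weights: global-average-pooled gradients per channel.
    alphas = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    cam = [[0.0] * W for _ in range(H)]
    for k in range(K):
        for i in range(H):
            for j in range(W):
                cam[i][j] += alphas[k] * activations[k][i][j]
    # ReLU: keep only locations that push the class score up.
    return [[max(0.0, v) for v in row] for row in cam]
```

The resulting map is then upsampled and overlaid on the X-ray to show which lung regions drove the prediction.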
- Research Article
- 10.1080/09540091.2024.2445249
- Jan 2, 2025
- Connection Science
This study aims to benchmark the performance of machine learning (ML), deep learning (DL), and generative AI (GenAI) models in categorising assessment questions based on Bloom’s Taxonomy. Previous studies have lacked comprehensive investigations into the performance of these approaches. Further, GenAI remains unexplored, offering a promising avenue for groundbreaking exploration. Therefore, we explore the effectiveness of various ML models by incorporating domain-specific term weighting and utilising word embeddings. The study also analyses the performance of Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) with and without bidirectional connections, as well as an approach that combines RNNs and CNNs. Furthermore, we evaluate several transformer-based models by fine-tuning them, alongside the GenAI models text-davinci-003, gpt-3.5-turbo, PaLM2, and Gemini Pro in zero-shot classification settings. The results demonstrate that ML models outperformed DL models, achieving a best accuracy of 0.871 and F1 score of 0.872. Additionally, domain-specific term weighting is found to be superior to word embeddings. Furthermore, most ML and DL models performed better than GenAI models, with GenAI models achieving a best accuracy of 0.618 and a best F1 score of 0.627. Therefore, the outcomes suggest considering ML models with domain-specific term weighting as benchmark models in future research.
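The finding that ML models with term weighting beat DL and GenAI baselines rests on TF-IDF-style representations. A self-contained sketch of that idea follows; the tiny question corpus and Bloom's-level labels are hypothetical, and plain TF-IDF with nearest-neighbour cosine similarity stands in for the paper's actual domain-specific weighting scheme and classifiers:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists -> one {term: tf-idf weight} dict per doc.
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(query, docs, labels):
    # Label a question with the Bloom's level of its most similar training question.
    vecs = tfidf_vectors(docs)
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    tf = Counter(query)
    # Unseen query terms have no document frequency and are skipped.
    qv = {t: (c / len(query)) * math.log(n / df[t]) for t, c in tf.items() if t in df}
    best = max(range(len(docs)), key=lambda i: cosine(qv, vecs[i]))
    return labels[best]
```

With this scheme, a question built around "define" and "list" lands nearest the Remember-level examples, while "design" and "construct" pull toward Create.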
- Research Article
- 10.1007/s10916-024-02037-3
- Feb 23, 2024
- Journal of Medical Systems
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, presenting them with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting, with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
- Research Article
- 10.1371/journal.pone.0324957
- Jul 17, 2025
- PloS one
Pneumonia, a severe lung infection caused by various viruses, presents significant challenges in diagnosis and treatment due to its similarities with other respiratory conditions. Additionally, the need to protect patient privacy complicates the sharing of sensitive clinical data. This study introduces FLPneXAINet, an effective framework that combines federated learning (FL) with deep learning (DL) and explainable AI (XAI) to securely and accurately predict pneumonia using chest X-ray (CXR) images. We utilized a benchmark dataset from Kaggle, comprising 8,402 CXR images (3,904 normal and 4,498 pneumonia). The dataset was preprocessed and augmented using a cycle-consistent generative adversarial network (CycleGAN) to increase the volume of training data. Three pre-trained DL models, VGG16, NASNetMobile, and MobileNet, were employed to extract features from the augmented dataset. Further, four ensemble DL (EDL) models were used to enhance feature extraction. Feature optimization was performed using recursive feature elimination (RFE), analysis of variance (ANOVA), and random forest (RF) to select the most relevant features. These optimized features were then input into machine learning (ML) models, including K-nearest neighbor (KNN), naive Bayes (NB), support vector machine (SVM), and RF, for pneumonia prediction. The performance of the models was evaluated in a FL environment, with the EDL network achieving the best results: accuracy 97.61%, F1 score 98.36%, recall 98.13%, and precision 98.59%. The framework's predictions were further validated using two XAI techniques: Local Interpretable Model-Agnostic Explanations (LIME) and Grad-CAM. FLPneXAINet offers a robust solution for healthcare professionals to accurately diagnose pneumonia, ensuring timely treatment while safeguarding patient privacy.
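Among the feature-optimization steps listed above, recursive feature elimination is the simplest to sketch. The loop below is an illustrative stand-in, not the paper's implementation: a plain correlation ranking serves as the scorer, and the weakest feature is dropped each round (a full RFE would refit the downstream model at every step):

```python
def rfe(X, y, n_keep):
    # X: list of feature rows; y: numeric labels.
    # Rank features by |Pearson correlation with y| and drop the weakest
    # one per round until n_keep feature indices remain.
    feats = list(range(len(X[0])))

    def score(j):
        xs = [row[j] for row in X]
        mx, my = sum(xs) / len(xs), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        vx = sum((a - mx) ** 2 for a in xs) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx and vy else 0.0

    while len(feats) > n_keep:
        feats.remove(min(feats, key=score))
    return feats
```

In the toy case of one label-tracking feature and one constant feature, the informative feature survives elimination.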
- Research Article
- 10.3389/fonc.2023.1151257
- Jun 6, 2023
- Frontiers in Oncology
Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improved recognition accuracy using computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed at the initial step to increase the dataset size, and then two pretrained deep learning models are employed. Both models have been fine-tuned and trained using deep transfer learning. Both models (Xception and ShuffleNet) utilize the global average pooling layer for deep feature extraction. The analysis of this step shows that some important information is missing; therefore, we performed feature fusion. Because the fusion process increased the computational time, we developed an improved Butterfly Optimization Algorithm. Using this algorithm, only the best features are selected and classified using machine learning classifiers. In addition, a Grad-CAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets, ISIC2018 and HAM10000, were utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparing the proposed framework with state-of-the-art methods reveals improved accuracy and lower computational time.
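The fusion of the two backbones' pooled features (before Butterfly-based selection) can be illustrated with simple serial fusion; the z-score-then-concatenate scheme below is a generic sketch under that assumption, not the paper's improved fusion method:

```python
def fuse_features(f1, f2):
    # Serial feature fusion: z-score each backbone's feature vector so
    # neither model dominates by scale, then concatenate the two vectors.
    def z(v):
        m = sum(v) / len(v)
        s = (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5 or 1.0  # guard zero std
        return [(x - m) / s for x in v]
    return z(f1) + z(f2)
```

The fused vector's length is the sum of the two inputs' lengths, which is exactly why a subsequent selection step is needed to keep classification tractable.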
- Research Article
- 10.32996/jbms.2025.7.6.4
- Oct 25, 2025
- Journal of Business and Management Studies
The work examines how Deep Learning (DL) and Generative Artificial Intelligence (GenAI) can be strategically incorporated into US industrial manufacturing to accelerate product prototyping and improve compliance with export control regulations and intellectual property (IP) strategy. With the manufacturing sector swiftly adopting digital transformation within the Industry 4.0 framework, DL and GenAI technologies are reshaping established work processes, including automating design iteration, cutting production latency, and improving innovation management. Nonetheless, their rapid adoption creates new issues around export-controlled technologies and the ownership of IP in algorithmic outputs. The paper utilizes a mixed-method design integrating data-driven model simulations, case studies, and policy framework analysis. Results indicate that GenAI-based prototyping has the potential to cut the design cycle by up to 40 percent while ensuring regulatory compliance through embedded model governance. Moreover, predictive maintenance accuracy can be increased with the help of DL, and patentable innovations can be facilitated through automated differentiation in design. The paper also identifies new gaps in policies regarding dual-use AI applications and prescribes a systematic framework for aligning AI innovation with export control compliance and IP protection policies. The findings can be helpful to policymakers, industrial executives, and R&D strategists who want to use generative and deep learning systems responsibly in the US manufacturing environment.
- Research Article
- 10.3390/biology14101313
- Sep 23, 2025
- Biology
The current study presents a multi-class, image-based classification of eight morphologically similar macroscopic Earthstar fungal species (Astraeus hygrometricus, Geastrum coronatum, G. elegans, G. fimbriatum, G. quadrifidum, G. rufescens, G. triplex, and Myriostoma coliforme) using deep learning and explainable artificial intelligence (XAI) techniques. For the first time in the literature, these species are evaluated together, providing a highly challenging dataset due to significant visual overlap. Eight different convolutional neural network (CNN) and transformer-based architectures were employed, including EfficientNetV2-M, DenseNet121, MaxViT-S, DeiT, RegNetY-8GF, MobileNetV3, EfficientNet-B3, and MnasNet. The accuracy scores of these models ranged from 86.16% to 96.23%, with EfficientNet-B3 achieving the best individual performance. To enhance interpretability, Grad-CAM and Score-CAM methods were utilised to visualise the rationale behind each classification decision. A key novelty of this study is the design of two hybrid ensemble models: EfficientNet-B3 + DeiT and DenseNet121 + MaxViT-S. These ensembles further improved classification stability, reaching 93.71% and 93.08% accuracy, respectively. Based on metric-based evaluation, the EfficientNet-B3 + DeiT model delivered the most balanced performance, with 93.83% precision, 93.72% recall, 93.73% F1-score, 99.10% specificity, a log loss of 0.2292, and an MCC of 0.9282. Moreover, this modelling approach holds potential for monitoring symbiotic fungal species in agricultural ecosystems and supporting sustainable production strategies. This research contributes to the literature by introducing a novel framework that simultaneously emphasises classification accuracy and model interpretability in fungal taxonomy. The proposed method successfully classified morphologically similar puffball species with high accuracy, while explainable AI techniques revealed biologically meaningful insights. All evaluation metrics were computed exclusively on a 10% independent test set that was entirely separate from the training and validation phases. Future work will focus on expanding the dataset with samples from diverse ecological regions and testing the method under field conditions.
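The hybrid ensembles above (e.g., EfficientNet-B3 + DeiT) can be realised by averaging the member models' class probabilities. A minimal sketch with made-up logits, assuming equal weighting rather than whatever combination rule the study actually used:

```python
import math

def softmax(logits):
    # Numerically stable softmax over one model's raw class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_a, logits_b, w=0.5):
    # Blend the two members' class probabilities; argmax is the ensemble label.
    pa, pb = softmax(logits_a), softmax(logits_b)
    probs = [w * a + (1 - w) * b for a, b in zip(pa, pb)]
    return probs.index(max(probs))
```

Averaging probabilities (rather than hard votes) lets a confident member outweigh an uncertain one, which is one mechanism by which such ensembles stabilise predictions across visually overlapping species.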
- Book Chapter
- 10.4018/979-8-3693-0876-9.ch016
- Oct 18, 2023
Deep reinforcement learning (DRL) is an emerging area of machine learning that combines reinforcement learning and deep learning, focusing on maximizing rewards. It uses a series of algorithms to enable an agent to learn how to make decisions in a complex environment, based on the environment and the rewards associated with each action. The goal of DRL is to maximize the long-term reward of an agent. In order to do this, the agent must use a combination of deep learning, reinforcement learning, and other AI techniques to learn which actions will lead to the highest reward. DRL is used to solve a variety of problems, from playing video games to controlling robots. It is also used in autonomous driving and robotics, as well as in financial trading. DRL is a powerful tool for solving complex problems and has been used in a variety of research projects. DRL has the potential to revolutionize the way we interact with machines and the environment.
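The reward-maximization loop the chapter describes can be shown concretely with tabular Q-learning, the classical precursor in which a lookup table stands in for the deep network of DRL proper (as in DQN). The environment interface and the chain task in the usage example are invented for illustration:

```python
import random

def q_learning(n_states, n_actions, step, episodes=300,
               alpha=0.5, gamma=0.9, eps=0.2, max_steps=50):
    # step(s, a) -> (next_state, reward, done).  The Q table plays the role
    # a deep network plays in DRL: it estimates long-term reward per action.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(max_steps):
            if done:
                break
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Temporal-difference update toward reward + discounted best next value.
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

On a toy 3-state chain where moving right eventually yields a reward, the learned table prefers the rewarding action, illustrating how long-term reward is propagated backwards through states.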
- Conference Article
- 10.1109/eecs.2017.10
- Nov 1, 2017
In this paper, we introduce a distributed deep learning platform, BAIPAS, a Big Data and AI based Prediction and Analysis System. When deep learning uses big data, training takes much time; distributed deep learning is one method to reduce training time. When big data resides in external storage, training takes a long time because loading data during deep learning operations incurs heavy network I/O. We propose data locality management as a way to reduce training time with big data. BAIPAS is a distributed deep learning platform that aims to provide fast learning from big data, easy installation and monitoring of the platform, and convenience for developers of deep learning models. To provide fast training using big data, data is distributed and stored in worker-server storage using data locality and shuffling, and then training is performed. The data locality manager analyzes the training data and the state information of the worker servers, and schedules data distribution according to the available storage space and the learning performance of each worker server. However, if each worker server conducts deep learning using only its portion of the distributed training data, model overfitting may occur compared with learning on the full training data set. To solve this problem, we applied a shuffling method that moves already-learned data to another worker server during training, so that each worker server can eventually see the full training data set. BAIPAS uses Kubernetes and Docker to provide easy installation and monitoring of the platform. It also provides pre-processing modules, management tools, automation of cluster creation, resource monitoring, and other resources, so developers can easily develop deep learning models.
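The locality-aware placement and epoch-wise shuffling that BAIPAS describes can be sketched abstractly. The greedy capacity-based placement and round-robin rotation below are assumptions made for illustration, not BAIPAS's actual scheduler (which also weighs per-worker learning performance):

```python
def assign_shards(shards, capacities):
    # shards: {shard_id: size}; capacities: {worker: free storage}.
    # Greedy placement: largest shard first, to the worker with the most
    # remaining storage, so data lives locally on the training node.
    placement = {w: [] for w in capacities}
    free = dict(capacities)
    for shard, size in sorted(shards.items(), key=lambda kv: -kv[1]):
        w = max(free, key=free.get)
        placement[w].append(shard)
        free[w] -= size
    return placement

def rotate(placement):
    # Epoch-wise shuffling: hand each worker's shards to the next worker,
    # so every worker eventually trains on the full data set and the
    # per-shard overfitting the paper warns about is avoided.
    workers = list(placement)
    return {workers[(i + 1) % len(workers)]: placement[workers[i]]
            for i in range(len(workers))}
```

Rotation preserves the overall shard set while changing which worker holds which data each epoch.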
- Research Article
- 10.1088/1757-899x/1293/1/012022
- Nov 1, 2023
- IOP Conference Series: Materials Science and Engineering
Numerical modelling of adhesive composites in wind energy is complicated in part by material heterogeneity. Microstructural CT-scan fibre composite patterns, or representative elements, which play a major role in defining the mechanical behaviour of these adhesive structures, are difficult both to characterize and to simulate numerically. With advances in deep learning based generative AI, new ways of predicting the mechanical behaviour of heterogeneous materials are now possible. Here we put forward a data-driven method to relate input composite adhesive microstructures to field data using deep learning and generative AI based methods. Mechanical stress or strain fields and similar patterns are predicted as a function of boundary conditions, fibre composite microstructure, and material models; the models are trained to closely approximate computationally expensive simulations based on numerical FE techniques and to generalize. We also create a dataset of wind energy adhesives with their numerical mechanics-based FE simulations, subject to different boundary conditions and material models, for further deep learning based composite studies.
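At its crudest, the microstructure-to-field mapping described above is a lookup against FE-simulated examples. The 1-nearest-neighbour surrogate below is a deliberately simple stand-in (descriptor vectors and field labels are hypothetical) for the deep generative model the paper trains; it illustrates the data pairing, not the method:

```python
def surrogate_predict(query, database):
    # database: list of (microstructure descriptor vector, stored FE field) pairs
    # produced offline by expensive finite-element simulation.
    # Return the field of the nearest stored microstructure (squared Euclidean).
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(database, key=lambda pair: dist(pair[0], query))[1]
```

A trained network replaces this lookup with interpolation, which is what gives the learned surrogate its ability to generalize beyond the simulated cases.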
- Research Article
- 10.1186/s12889-025-22705-4
- Apr 24, 2025
- BMC Public Health
Background: Understanding the complex interplay between life course exposures, such as adverse childhood experiences and environmental factors, and disease risk is essential for developing effective public health interventions. Traditional epidemiological methods, such as regression models and risk scoring, are limited in their ability to capture the non-linear and temporally dynamic nature of these relationships. Deep learning (DL) and explainable artificial intelligence (XAI) are increasingly applied within healthcare settings to identify influential risk factors and enable personalised interventions. However, significant gaps remain in understanding their utility and limitations, especially for sparse longitudinal life course data, and in how the influential patterns identified using explainability are linked to underlying causal mechanisms. Methods: We conducted a controlled simulation study to assess the performance of various state-of-the-art DL architectures, including CNNs and (attention-based) RNNs, against XGBoost and logistic regression. Input data was simulated to reflect a generic and generalisable scenario, with different rules used to generate multiple realistic outcomes based upon epidemiological concepts. Multiple metrics were used to assess model performance in the presence of class imbalance, and SHAP values were calculated. Results: We find that DL methods can accurately detect dynamic relationships that baseline linear models and tree-based methods cannot. However, no one model consistently outperforms the others across all scenarios. We further identify the superior performance of DL models in handling sparse feature availability over time compared to traditional machine learning approaches. Additionally, we examine the interpretability provided by SHAP values, demonstrating that these explanations often misalign with causal relationships, despite excellent predictive and calibration performance. Conclusions: These insights provide a foundation for future research applying DL and XAI to life course data, highlighting the challenges associated with sparse healthcare data and the critical need for advancing interpretability frameworks in personalised public health.
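The SHAP analysis discussed above builds on Shapley values; for a model with few features they can be computed exactly by enumerating feature orderings, as in this generic sketch (the linear toy model in the test is invented, and real SHAP uses sampling or model-specific shortcuts rather than full enumeration):

```python
import math
from itertools import permutations

def shapley_values(predict, x, baseline):
    # Exact Shapley attributions by enumerating all feature orderings.
    # Features not yet "switched on" keep their baseline value, so each
    # feature is credited with its average marginal contribution.
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        current = list(baseline)
        prev = predict(current)
        for j in order:
            current[j] = x[j]
            val = predict(current)
            phi[j] += val - prev  # marginal contribution of feature j here
            prev = val
    return [p / math.factorial(n) for p in phi]
```

For a purely linear model the attributions recover the coefficients exactly; the paper's point is that even such faithful attributions describe the model's behaviour, not the underlying causal mechanism.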
- Conference Article
- 10.52842/conf.sigradi.2024.0037
- Jan 1, 2024
Architecture's future in Europe: Deep learning and generative AI
- Conference Article
- 10.1109/icicnis64247.2024.10823172
- Dec 17, 2024
Renewable Energy Consumption on Solar and Wind Energy Prediction Using Deep Learning and Generative AI
- Research Article
- 10.48175/ijarsct-29926
- Nov 17, 2025
- International Journal of Advanced Research in Science, Communication and Technology
Mental health concerns affect individuals across all ages, yet many people hesitate to seek help due to stigma, limited access to professionals, or fear of judgement. Although AI-driven chatbots offer a convenient way to provide support, most systems rely only on text, making their emotional understanding narrow and often inaccurate. This dissertation proposes a Multimodal Emotion-Aware Conversational Agent (MEACA) that interprets emotions using three complementary modalities: text, facial cues, and physiological signals. Text understanding is handled using transformer-based language models; facial emotions are detected with Vision Transformers; and physiological signals are interpreted using BiLSTM architectures. A cross-attention fusion layer integrates these signals, and a generative model produces emotionally aligned responses. Experiments on datasets like GoEmotions, AffectNet, and K-EmoCon demonstrate improved emotion recognition and more empathetic interactions. The model aims to offer a practical, accessible tool that can support mental health care more effectively than text-only systems.
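The cross-attention fusion layer described above reduces, in its simplest form, to single-query scaled dot-product attention: the text embedding attends over the other modalities' embeddings and returns their weighted blend. The sketch below uses toy 2-dimensional embeddings and is not the dissertation's architecture:

```python
import math

def attention(query, keys, values):
    # Single-query scaled dot-product attention.
    # query: text embedding; keys/values: per-modality embeddings
    # (e.g., facial and physiological streams).
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                       # stabilise the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]             # attention weights over modalities
    return [sum(w[i] * values[i][j] for i in range(len(values)))
            for j in range(len(values[0]))]
```

A modality whose key aligns with the text query receives the larger weight, so its signal dominates the fused representation passed to the response generator.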
- Research Article
- 10.4103/ojmpc_20243001_1
- Jan 1, 2024
- Orthopaedic Journal of Madhya Pradesh Chapter
New technologies in orthopedics are leading to innovative solutions and improved patient outcomes. Various cutting-edge technologies are revolutionizing orthopedic surgery, including smart implants and wearable technology, 3D printing, telehealth, artificial intelligence, digital templating, online-based orthopedic visits, Picture Archiving and Communication Systems (PACS), computer-assisted surgery (CAS), deep learning and generative AI, big data, augmented reality, ambulatory surgery centers (ASCs), virtual care technology, robotics, biological treatments, and patient-specific implants.