Application of Machine Learning in Radiative Heat Transfer
- Research Article
1
- 10.54254/2755-2721/51/20241165
- Mar 25, 2024
- Applied and Computational Engineering
With the rapid development of the Internet and the rise of e-commerce, commercial enterprises face large volumes of data and a complex market environment. In this setting, machine learning has become a powerful and widely used tool for business analysis. Taking Amazon and eBay as examples, this dissertation studies the application of machine learning in corporate business analytics, focusing on its role in market prediction, customer behavior analysis, and operations optimization. The cases analyzed show that machine learning plays an important role in helping companies make more accurate decisions and improve efficiency. Studying Amazon's use of machine learning in business analytics can deepen academic research on machine learning in business and promote the technology's adoption and development in other business scenarios. Overall, machine learning in business analytics helps companies understand customer behavior, optimize operations, and improve sales results. Challenges remain, however, in data quality, algorithm selection, and privacy protection, so further research and innovation are needed to advance machine learning applications in business analytics.
- Research Article
- 10.1002/cben.70012
- Jun 2, 2025
- ChemBioEng Reviews
This paper reviews machine learning (ML) applications in chemical engineering (ChemE) and offers perspectives for the future. First, the evolution of ML, data structures, and ML applications in ChemE is reviewed; then, the current state of the art in ML and its ChemE applications is summarized. Finally, a perspective on future developments is provided, covering recently popularized tools such as generative artificial intelligence (AI) and large language models (LLMs), as well as major challenges and limitations. Although the initial applications were mainly in fault detection, signal processing, and process modeling, the focus has since extended to fields including material development, property estimation, and performance analysis, using more complex models and datasets. In the future, developments such as LLMs will likely become more widespread; other emerging approaches, including automated ML, physics-informed ML, and transfer learning, along with field-specific databases, will also receive more attention. ML applications in ChemE-related fields, such as new energy technologies, environmental issues, and new material discovery, are expected to grow further.
- Conference Article
- 10.1109/icidca56705.2023.10100252
- Mar 14, 2023
Machine learning in medical applications is currently one of the main focus areas for researchers. Machine learning, combined with artificial intelligence, not only provides solutions to complex problems but has also revolutionized the medical field. The main aim of machine learning is to improve its learning process over time by taking in relevant data and information in the form of different inputs and observations. This study reviews techniques for predicting and detecting medical diseases with distinct deep learning and machine learning models. Diseases such as cancers and heart, lung, thyroid, and kidney conditions are discussed. Detecting and analyzing medical diseases is one of the prominent applications of machine and deep learning, and deep learning in particular offers a large set of innovative tools relevant to problems in medical image processing. The study first discusses applications of machine learning and then covers advances for specific diseases, including breast cancer, heart disease, skin disease, and kidney disease.
- Research Article
- 10.1016/j.ijbiomac.2025.142374
- May 1, 2025
- International journal of biological macromolecules
Application of explainable machine learning in the production of pullulan by Aureobasidium pullulans CGMCCNO.7055.
- Research Article
- 10.1145/3729394
- Jun 19, 2025
- Proceedings of the ACM on Software Engineering
Machine learning (ML) applications have become an integral part of our lives. ML applications extensively use floating-point computation and involve very large/small numbers; thus, maintaining the numerical stability of such complex computations remains an important challenge. Numerical bugs can lead to system crashes, incorrect output, and wasted computing resources. In this paper, we introduce a novel idea, namely soft assertions (SA), to encode safety/error conditions for the places where numerical instability can occur. A soft assertion is an ML model automatically trained using the dataset obtained during unit testing of unstable functions. Given the values at the unstable function in an ML application, a soft assertion reports how to change these values in order to trigger the instability. We then use the output of soft assertions as signals to effectively mutate inputs to trigger numerical instability in ML applications. In the evaluation, we used the GRIST benchmark, a total of 79 programs, as well as 15 real-world ML applications from GitHub. We compared our tool with 5 state-of-the-art (SOTA) fuzzers. We found all the GRIST bugs and outperformed the baselines. We found 13 numerical bugs in real-world code, one of which had already been confirmed by the GitHub developers. While the baselines mostly found the bugs that report NaN and INF, our tool found numerical bugs with incorrect output. We showed one case where the Tumor Detection Model, trained on Brain MRI images, should have predicted "tumor" but instead incorrectly predicted "no tumor" due to the numerical bugs. Our replication package is located at https://figshare.com/s/6528d21ccd28bea94c32.
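The instability this abstract targets can be illustrated with a minimal NumPy sketch (ours, not the paper's tooling): a hand-written range check stands in for a trained soft assertion, flagging inputs that would push `exp` past float64 range in a naive softmax.

```python
import numpy as np

def naive_softmax(x):
    # Numerically unstable: exp() overflows to inf for large inputs,
    # and inf / inf then yields NaN.
    e = np.exp(x)
    return e / e.sum()

def stable_softmax(x):
    # Standard max-subtraction trick keeps every exp() argument <= 0.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def soft_assert_softmax(x, limit=700.0):
    # Stand-in for a trained soft assertion: encodes the safety
    # condition "inputs stay inside float64 exp() range" by hand.
    return bool(np.max(x) < limit)

x = np.array([1000.0, 0.0])
unstable = naive_softmax(x)   # contains NaN
safe = stable_softmax(x)      # valid probability vector
```

In the paper the check is learned from unit-test data rather than hand-coded, but the role is the same: warn before the unstable function is entered.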
- Conference Article
54
- 10.1109/issrew.2018.00024
- Oct 1, 2018
Machine Learning (ML) applications have emerged as the killer applications for next generation hardware and software platforms, and there is a lot of interest in software frameworks to build such applications. TensorFlow is a high-level dataflow framework for building ML applications and has become the most popular one in the recent past. ML applications are also being increasingly used in safety-critical systems such as self-driving cars and home robotics. Therefore, there is a compelling need to evaluate the resilience of ML applications built using frameworks such as TensorFlow. In this paper, we build a high-level fault injection framework for TensorFlow called TensorFI for evaluating the resilience of ML applications. TensorFI is flexible, easy to use, and portable. It also allows ML application programmers to explore the effects of different parameters and algorithms on error resilience.
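The operator-level fault injection such a framework performs can be mimicked at toy scale (plain NumPy; the index and fault value are arbitrary illustrative choices): corrupt one element of an intermediate tensor and compare against a fault-free golden run.

```python
import numpy as np

def inject_fault(tensor, index, value):
    # Single-element corruption of an operator's output, loosely
    # mimicking the injections a framework-level tool performs.
    faulty = tensor.copy()
    faulty[index] = value
    return faulty

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
w = rng.normal(size=(4, 3))

pre = x @ w                            # matmul operator output
golden = np.maximum(pre, 0.0)          # fault-free ReLU result
faulty = np.maximum(inject_fault(pre, (0, 0), 1e6), 0.0)
sdc = not np.allclose(golden, faulty)  # silent data corruption?
```

Comparing each injected run against the golden output is how resilience studies classify outcomes (benign, crash, or silent corruption).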
- Research Article
2
- 10.3390/info14010053
- Jan 16, 2023
- Information
Machine learning (ML) techniques discover knowledge from large amounts of data, and ML models are becoming essential to software systems in practice. ML research communities have focused on the accuracy and efficiency of ML models, while less attention has been paid to validating their quality. Validating ML applications is a challenging and time-consuming process for developers since prediction accuracy heavily relies on the generated models. ML applications are written in a relatively data-driven programming style on top of black-box ML frameworks, and the datasets and the ML application must each be investigated individually, so validation takes considerable time and effort. To address this limitation, we present MLVal, a novel quality validation technique that increases the reliability of ML models and applications. Our approach helps developers inspect the training data and the features generated for the ML model. Data validation is important and beneficial to software quality since the quality of the input data affects the speed and accuracy of training and inference. Inspired by software debugging/validation for reproducing reported bugs, MLVal takes as input an ML application and its training datasets to build the ML models, helping ML application developers easily reproduce and understand anomalies in the ML application. We have implemented an Eclipse plugin for MLVal that allows developers to validate the prediction behavior of their ML applications, the ML model, and the training data in the Eclipse IDE. In our evaluation, we used 23,500 documents from the bioengineering research domain. We assessed the ability of MLVal to effectively help ML application developers (1) investigate the connection between the produced features and the labels in the training model, and (2) detect errors early to secure model quality through better data. Our approach reduces the engineering effort required to validate problems, improving data-centric workflows in ML application development.
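The kind of pre-training check such a tool automates can be sketched with a few hand-written rules (an illustrative stand-in, not MLVal's implementation): verify the label vocabulary, feature/label alignment, and missing values before any model is built.

```python
def validate_training_data(features, labels, allowed_labels):
    # Lightweight pre-training checks in the spirit of data validation:
    # catch input problems before they surface as model anomalies.
    issues = []
    if len(features) != len(labels):
        issues.append("feature/label count mismatch")
    for i, y in enumerate(labels):
        if y not in allowed_labels:
            issues.append(f"row {i}: unknown label {y!r}")
    for i, row in enumerate(features):
        if any(v is None for v in row):
            issues.append(f"row {i}: missing feature value")
    return issues

X = [[1.0, 2.0], [3.0, None], [5.0, 6.0]]
y = ["pos", "neg", "maybe"]
problems = validate_training_data(X, y, allowed_labels={"pos", "neg"})
# flags the missing value in row 1 and the unknown label in row 2
```

Checks like these are cheap relative to training, which is why data-centric workflows run them first.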
- Book Chapter
1
- 10.1016/b978-0-323-85159-6.50283-9
- Jan 1, 2022
- Computer Aided Chemical Engineering
Application of machine learning and big data for smart energy management in manufacturing
- Research Article
15
- 10.1002/widm.1476
- Aug 12, 2022
- WIREs Data Mining and Knowledge Discovery
Since the beginning of the 21st century, the fields of astronomy and astrophysics have experienced significant growth at observational and computational levels, leading to the acquisition of increasingly huge volumes of data. To process this vast quantity of information, artificial intelligence (AI) techniques are being combined with data mining to detect patterns, with the aim of modeling, classifying, or predicting the behavior of certain astronomical phenomena or objects. In parallel with the exponential development of these techniques, scientific output related to the application of AI and machine learning (ML) in astronomy and astrophysics has also grown considerably in recent years. The increasingly abundant articles make it difficult to monitor the field in terms of which research topics are the most prolific or novel, or which countries or authors are leading them. In this article, a text-mining-based scientometric analysis of scientific documents published over the last three decades on the application of AI and ML in astronomy and astrophysics is presented. The VOSviewer software and data from the Web of Science (WoS) are used to elucidate the evolution of publications in this research field, their distribution by country (including co-authorship), the most relevant topics addressed, and the most cited elements and most significant co-citations by publication source and authorship. The results demonstrate that the application of AI/ML to astronomy/astrophysics is an established and rapidly growing field of research that is crucial to obtaining scientific understanding of the universe. This article is categorized under: Algorithmic Development > Text Mining; Technologies > Machine Learning; Application Areas > Science and Technology.
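The co-word maps drawn by tools like VOSviewer rest on a simple statistic: how often two keywords occur in the same record. A minimal sketch with made-up records:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    # Counts how often each keyword pair appears together in one
    # record -- the basic statistic behind co-word network maps.
    pairs = Counter()
    for keywords in documents:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

records = [
    ["machine learning", "astronomy", "classification"],
    ["machine learning", "astronomy"],
    ["astrophysics", "machine learning"],
]
counts = cooccurrence_counts(records)
# ("astronomy", "machine learning") co-occurs in 2 records
```

Real scientometric pipelines add thresholding, normalization, and clustering on top of these raw pair counts.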
- Research Article
33
- 10.1016/j.matpr.2021.12.101
- Dec 18, 2021
- Materials Today: Proceedings
Machine learning applications in healthcare sector: An overview
- Book Chapter
8
- 10.1007/978-3-642-05224-8_3
- Jan 1, 2009
Transfer learning is a new machine learning and data mining framework that allows the training and test data to come from different distributions or feature spaces. Many novel applications of machine learning and data mining require transfer learning. While much has been done on transfer learning in text classification and reinforcement learning, there has been a lack of documented success stories of novel applications in other areas. In this invited article, I argue that transfer learning is in fact quite ubiquitous in many real-world applications, and I illustrate this point through an overview of a broad spectrum of transfer learning applications ranging from collaborative filtering to sensor-based location estimation and logical action model learning for AI planning. I also discuss some potential future directions of transfer learning.
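The core move described above, reusing a representation learned on a data-rich source task and fitting only a small head on the data-poor target, can be sketched with linear models on synthetic data (an illustration of the idea, not a method from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Source task: plenty of data. Learn a linear "representation" W
# by least squares, then freeze it.
Xs = rng.normal(size=(200, 5))
W_true = rng.normal(size=(5, 3))
Ys = Xs @ W_true
W = np.linalg.lstsq(Xs, Ys, rcond=None)[0]

# Target task: few examples that share the source's structure.
# Fit only a small head on the frozen features instead of
# relearning everything from raw inputs.
Xt = rng.normal(size=(10, 5))
yt = (Xt @ W_true)[:, 0]
Ht = Xt @ W                                   # transferred features
head = np.linalg.lstsq(Ht, yt, rcond=None)[0]
pred = Ht @ head
err = float(np.mean((pred - yt) ** 2))
```

Because the target shares the source's structure here, ten examples suffice for the head; when the domains diverge, that gap is exactly what transfer learning research addresses.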
- Research Article
4
- 10.1186/s40246-022-00376-1
- Feb 18, 2022
- Human Genomics
Identification of genomic signals as indicators of functional genomic elements is one of the areas that received early and widespread application of machine learning methods. Over time, the methods applied grew in variety and generally improved in their ability to identify major genomic and transcriptomic signals. The evolution of machine learning in genomics followed a path similar to its applications in other fields, shaped by three dominant developments: an enormous increase in the availability and quality of data, a significant increase in the computational power available to machine learning applications, and new machine learning paradigms, of which deep learning is the best-known example. It is generally not easy to distinguish the factors that lead to improvements in applications of machine learning. This is even more so in genomics, where the advent of next-generation sequencing and the increased ability to perform functional analysis of raw data have had a major effect on the applicability of machine learning in OMICS fields. In this paper, we survey results from a subset of published work on machine learning for the recognition of genomic signals and regions in the human genome and summarize some lessons learned from this endeavor. There is no doubt that significant progress has been made in both the accuracy and the reliability of models. Questions remain, however, as to whether the progress has been sufficient and what these developments bring to genomics in general and human genomics in particular. Improving the usability, interpretability, and accuracy of models remains an important open challenge for current and future research applying machine learning, and more generally artificial intelligence methods, in genomics.
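A common entry point for machine learning on genomic signals is encoding a DNA sequence as a fixed-length k-mer frequency vector; the representation is standard, though this helper function is ours:

```python
from itertools import product

def kmer_features(seq, k=2):
    # Frequency vector over all 4**k DNA k-mers -- a classic
    # fixed-length input representation for sequence classifiers.
    alphabet = "ACGT"
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[m] / total for m in kmers]

vec = kmer_features("ACGTAC")
# 5 overlapping 2-mers: AC, CG, GT, TA, AC -> "AC" has frequency 2/5
```

Vectors like this feed directly into any standard classifier; deep models instead learn such features from one-hot encoded sequence.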
- Conference Article
7
- 10.1109/ccwc47524.2020.9031161
- Jan 1, 2020
A cost-effective, easily scalable, and application-independent FPGA cluster co-processing platform for machine learning (ML) applications is proposed. This work focuses on delivering an economical platform for researchers and developers without FPGA expertise who seek budget-friendly ways to increase the performance of their ML applications without relying on the highly expensive solutions available in today's FPGA market. The approach consists of two main parts: a CPU-based host machine learning application and an FPGA cluster, which share the heavy workload over an Ethernet protocol. An experiment with a perceptron layer implemented on the hardware shows execution time improvements from the proposed cluster approach. The solution scales easily to any FPGA platform with Ethernet support.
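The host/cluster split can be simulated by partitioning a perceptron layer's output neurons into independent shards, one per node, as each FPGA in the cluster would take a slice of the layer (pure NumPy; the Ethernet transport is not modeled):

```python
import numpy as np

def perceptron_layer(x, w, b):
    # Full layer: y = step(xW + b).
    return (x @ w + b > 0).astype(int)

def partition_columns(w, b, n_nodes):
    # Split output neurons across nodes; each node gets a column
    # slice of the weights and its matching bias entries.
    cols = np.array_split(np.arange(w.shape[1]), n_nodes)
    return [(w[:, c], b[c]) for c in cols]

def cluster_layer(x, shards):
    # Each shard computes its neurons independently (on FPGAs in the
    # paper's setup); the host concatenates the partial results.
    return np.concatenate(
        [perceptron_layer(x, wi, bi) for wi, bi in shards], axis=1)

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 6))
b = rng.normal(size=6)
host_out = perceptron_layer(x, w, b)
cluster_out = cluster_layer(x, partition_columns(w, b, n_nodes=3))
# both paths must agree element-wise
```

Because the shards are independent, the speedup depends only on how evenly the columns are split and on the communication cost, which this sketch omits.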
- Research Article
1
- 10.57020/ject.1475566
- Dec 31, 2024
- Journal of Emerging Computer Technologies
Machine learning has become an increasingly popular area of research in education, with potential applications in many aspects of higher education curriculum design. This study reviews current applications of AI in higher education curriculum design. We searched three core educational databases, the Educational Research Resources Information Centre (ERIC), the British Education Index (BEI), and Education Research Complete, to identify relevant literature on the application of machine learning in curriculum design in higher education. We then performed network analysis on the included literature to gain a deeper understanding of the common themes and topics within the field. The results show a growing trend in published research on machine learning in education. However, our review identified only 11 publications specifically targeting machine learning in higher education course design, of which only three were peer-reviewed articles. Word cloud visualization identified the most prominent keywords as AI, foreign countries, pedagogy, online courses, e-learning, and course design; collectively, these keywords underscore the significance of AI in shaping the educational landscape and the expanding tendency to incorporate AI technologies into online and technology-enhanced learning experiences. Although there is substantial research on machine learning in education, the literature on its specific use in higher education course design remains limited, and our review identified only a small number of studies that directly addressed this topic. The network analysis of the included literature highlights themes related to student learning and performance and to the use of models and algorithms.
Further research is still needed to fully understand the potential of machine learning in higher education course design. This study contributes to the literature in this specific field, can update teachers' awareness of using machine learning in teaching practice, and may encourage more researchers to conduct related work. Future studies should address the limitations of the existing literature and explore new approaches to incorporating machine learning into curriculum design to improve student learning outcomes.
- Conference Article
1
- 10.2514/6.1993-139
- Jan 11, 1993
Axisymmetric radiative heat transfer calculations for flows in chemical non-equilibrium